Author Archives: Kai Prenger

Kai Prenger 04/12 Personal Journal Entry

This past week, Connie and I spent most of our time learning the ins and outs of Wax. My primary activity was figuring out how to change the website’s style. Working with Wax brought some key challenges related to technology choices into focus.

When you’re working on a web development project with a small team, progress often takes longer than you expect to start, then moves faster than expected. As much as the ideal of iterative work lives on in project planning, most of our progress has come in fits and starts. Part of the slowness is learning new domains, as required by sourcing the visual media for our archive. But another aspect, the focus of this post, has been learning a technical tool from scratch.

While the Wax documentation owns up to a learning curve, noting the tool is “best suited for folks who are willing to take on some technical responsibility in exchange for a lot of flexibility,” the nature of static site generators is that complexity is kept to a minimum. In fact, a Wax workflow image gives a basic explanation of how a Wax site is built. What I didn’t anticipate was the complication of the additional frameworks and tools bundled with this particular library.

One example we stumbled on several times while making stylistic changes was the inheritability of the CSS/Sass that ships via Bootstrap. For two people with limited front-end development experience, it’s hard to understand where to change fonts, background colors, and hover states for links and text. The documentation suggests forking the Wax demo project from their repository, then “clobbering” the collection using Rake. These actions yield a complete, working set of styling and interactivity for the website via Bootstrap, but no straightforward landing spot for a new archivist without front-end design skills to update the design of their new website.

There are three Sass files in two directories that determine the CSS compiled when the site is generated.

Three Sass files in two directories

All three of these files interact and depend on each other in different ways. A couple of times this week, we struggled to understand exactly how to update a font or a hover color. Should we update the font-family? What about the $body-text variable created via Sass? At best, we got a compile-time error when we forgot a semicolon. We were able to make stylistic changes in the end, but if we need to make additional changes, we might not remember exactly what we did, or whether all of the steps we took were necessary.
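
For a sense of what these changes look like, here is a minimal sketch of the kind of override we converged on. The three-file layout and the $body-text variable come from the Wax demo; the custom variables file, the specific values, and the font choice are hypothetical:

    // _sass/custom/_variables.scss (hypothetical path)
    // Bootstrap declares its variables with !default, so overrides
    // must be imported *before* Bootstrap itself.
    $body-text: #2b2b2b;                          // project variable for body copy
    $font-family-base: 'Courier New', monospace;  // placeholder font choice
    $link-hover-color: #8b0000;                   // hover state for links

    // In the compiled entry point (e.g. assets/css/style.scss):
    // @import 'custom/variables';
    // @import 'bootstrap';

If the ordering is wrong, an override silently does nothing, which matches the kind of confusion we kept running into.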

Some other struggles came up beyond changing styles. We still haven’t discovered why about 20% of the images we’ve uploaded don’t render on our website. Debugging this is a challenge given the size of the Wax project.

Another slate of problems we didn’t foresee relates to performance from a developer’s perspective. From start to finish, generating the collection takes over two hours! If you add or remove images from a collection, you have to rebuild the entire collection, triggering a long process that works through 500+ images. In a similar vein, generating the static website currently takes more than twenty seconds locally, and six and a half minutes to deploy. I consulted a friend who formerly used Jekyll for their personal development blog, and they claimed it took four hours to build two years’ worth of posts. My hunch is that this is related to Digital Humanities’ comfort with shipping prototypes and smaller projects, and that our collection is a little too large for the Jekyll generator, which is built in Ruby, a relatively slow interpreted language. These performance issues almost led us to shift our archive over to Hugo, a static site generator built in Go, which would likely ameliorate the performance issues, but we decided against it due to time constraints and the introduction of yet another topic to learn about.
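
For context, the slow step is the derivative-generation Rake task. Our workflow looks roughly like the following; the task names come from wax_tasks, while temp_collection and the main search index are the (placeholder) names from our config:

    # regenerate image derivatives for the whole collection
    # (the step that takes over two hours for 500+ images)
    bundle exec rake wax:derivatives:iiif temp_collection

    # regenerate collection pages and the search index
    bundle exec rake wax:pages temp_collection
    bundle exec rake wax:search main

    # build and preview the site locally (20+ seconds per build)
    bundle exec jekyll serve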

As mentioned before, we will likely scale down the types of media available in the archive. Given the performance issues, we won’t experiment heavily with embedding videos, and will opt to include those on a resources page with appropriate links. I suspect we’ll be learning more about the ins and outs of Bootstrap, CSS, and Sass over the next week to continue shaping the website around our content. We also need to focus on furnishing the appropriate context and text for the archive, sooner rather than later, to give visitors a sense of the themes and purpose of the website. I’m still confident we will meet our shipping date in two weeks; it will just be a rough draft we’ll need to polish in the remaining two weeks of the class.

Mainframes Project Update – 2022-03-31

The mainframe project team is into week two of grinding on logistics and scaffolding for the archive. Our weekly meeting focused on two efforts: (a) identifying some missing non-technical dependencies and (b) setting up local development environments. The good news is we’re still within range of our Phase 2 milestones:

  • Decide which materials (photos, articles, books) are needed for the project and how they will be organized
    • Assign: Both
    • Deadline(s): 03/30
  • Zine/Documentation revision (1)
    • Assign: Connie
    • Deadline(s): April 6th

One encouraging fact about the archive is that there are 500+ images that will be available! As far as we can discern, that is a legitimate archive size to begin with! In the midst of collecting images, we weren’t entirely sure we’d have enough.

There are some desirable parts of the archive that we may need to reconsider.

For example, while the images have been curated heavily, we haven’t reviewed enough video material yet to make videos a first-class part of the archive. That doesn’t mean eliminating the video content we’ve found. We do plan on creating a ‘resources’ page with helpful books and websites, and embedded videos could be a compromise that gets the videos included, even if they aren’t curated and surfaced in the same way as the images.

As part of the revised proposal, we suggested that some or most of the archival material would be collaged and/or undergo deformance. We’re still exploring this option within the deadline for this semester.

To Do This Week

  • Fix broken images – a subset of images aren’t rendering for some reason
  • Upload last half of images – we’ve uploaded only half of the images, a process that consumes both time and concentration; the rest will be uploaded this week
  • Set up local dev environment – we did this last night
    • Generating and adding an ssh key to our repo
    • Cloning the repo
    • Installing ruby, non-ruby dependencies, and then installing ruby gems
    • Walking through the commands necessary to generate and serve the website from your computer before pushing and publishing changes (see the sketch after this list)
  • Develop color palette / design – we have only made small changes to the design thus far; the current color scheme may make some text hard to see for colorblind visitors
  • Build out the navigation – this will flesh out what contextual text is missing, and where we need to fill in the gaps
  • Remove wax project cruft – there are some textual and code references to the wax project’s demo site that we need to replace 😉
  • Finalize (?) logos maybe – we have a logo, but it wouldn’t work well for say a social media avatar; we’re exploring options; maybe we don’t need to use the logo for social media?
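
For the record, a condensed version of that setup follows; the email, repository URL, and host are placeholders, and the exact dependencies come from the project’s Gemfile:

    # generate an ssh key and register it with the git host
    ssh-keygen -t ed25519 -C "you@example.com"

    # clone the repository and enter it
    git clone git@github.com:our-org/wax-project.git
    cd wax-project

    # install the Ruby gems declared in the Gemfile
    bundle install

    # generate and serve the site locally before pushing changes
    bundle exec jekyll serve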

Kai – Personal Blog 3/23

As Connie mentioned in her blog post this week, we started building the website for our archive. For me, it’s typically nerve-racking to start showing work in progress. This is especially true on a subject that I find fascinating and worthy of exploration well beyond the end product due for this class. After hearing some conflicting ideas about how to proceed, given that we felt uncertain about the usefulness of our research and archival collection, we both agreed that trying to build something sooner might give us insight into the state of our research thus far. One method of overcoming my hesitation to build and publish was to channel the technical project/program manager’s bag of tricks I’ve collected over the last ten years of professional experience.

The most influential software development theories of the last three decades fall under the Agile Development umbrella. As such, I found some strength to start building out the website for our archive in the Manifesto for Agile Software Development. One value statement from that concise document I’ve always held close while working with software engineers is “working software over comprehensive documentation.” The principle behind this cardinal virtue of software is elaborated in the Principles behind the Agile Manifesto: “working software is the primary measure of progress.” One important caveat to the values espoused in the Agile Manifesto, each of which is formulated with the favored value on the left and the diminished value on the right, is that “while there is value in the items on the right, we value the items on the left more.” One can assert that planning and documentation play a role in the development of a digital project in the context of digital humanities. Certainly it’s true when it comes to acquiring funding for a digital initiative. Still, a plan and documentation pale somewhat next to having the digital project manifest itself, in this case as a website. Consequently, I used the agile rationale to dive head first into creating a repo and generating some version of our archive on the web using Wax.

Another source of inspiration came in the form of software engineering management wisdom gleaned from the development of IBM’s OS/360, the operating system developed for the company’s System/360 mainframe. Fred Brooks’s The Mythical Man-Month details the lessons he culled from his experience managing OS/360, germinating from a question IBM’s CEO asked Brooks during his exit interview: why did managing a software project seem so much more difficult than managing a hardware project? While the book is clearly from a particular time and place in history, some of the lessons feel durable even in the face of increased processing power and more ergonomic, productive programming technologies. One chapter, “Plan to Throw One Away,” felt appropriate to address any hesitation I might feel in starting our web archive as soon as possible. As Brooks sees it:

In most projects, the first system built is barely usable. There is no alternative but to start again, smarting but smarter, and build a redesigned version in which…problems are solved…one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. The only question is whether to plan in advance to build a throwaway, or to promise to deliver the throwaway to customers…Hence plan to throw one away; you will, anyhow (Brooks, 116).

In an attempt to embrace this lesson, we’ve started to assemble our metadata in the CSV without overthinking it, knowing that we won’t get the filters and categorizations right the first time, but that doing the work will yield the lessons for a revision. Similarly, we’re dumping our entire photo catalog into the website as a first step. While some curation has already taken place in the creation of our collection, we need to see all of the assets in one place to understand whether any of them won’t work for our archive for one reason or another. I’ve also embraced the throwaway nature of some of the initial work in the naming conventions in our code: the collection at present is called temp_collection, and we reuse templates from the Wax demo project as a starting point. Even the repo name, wax-project, is obviously a nondescript name. The key motivation is to get something working, even if we’ll eventually replace a great deal of it. Once we have a website up and running, we can critique it, file bugs and change requests against it, and create a new version that better addresses our needs.

Citations

Brooks, Frederick Phillips. The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley, 1995.

Mainframes Project Update – 2022-03-22

The Mainframe Project has started building out our initial version of the web archive. This means creating a website using the Wax library and setting up a publicly available git repository to allow for multiple code contributors. This stage represents a useful milestone. Why?

  • We may do work that will be thrown away, and that’s OK; it fosters learning (see near future blog posts for why this might be the case)
  • Building a website will reveal the places where our research is adequate, and where we need to fortify
  • Creating a concrete artifact to respond to is better than notional planning or a concept of what the archive may be
  • The end product of the project exists, even in nascent form 
  • We have a landing page to which we can point interested parties

Highlights

Information architecture

Archival projects scream out for structure. Not only is the material curated, but a visitor must be able to anchor their understanding of the collection with categorical filters. Our information architecture document is an initial attempt to understand how a visitor may navigate the entire collection.

Metadata

We’ve started to organize our metadata. Our collection of images likely comprises a full archive of mainframe and pre-personal-computing visual material. Still, we understand the archive image by image, and need to conceive of it as a collection. We’ll be dumping in our entire photo catalog to get a sense of which subcategories and filters make sense for our collection. Our emphasis at this time is on producing a potential collection filterable by many criteria, rather than defining ahead of time what the archive will be by the end of the semester. Valuing “working software over comprehensive documentation” is a critical insight more than 30 years old.

Repository

A publicly available repository is valuable for encouraging contributions from outside the Mainframe Project’s initial contributors. Working on our project is only a pull request away. Given the current size of the project, we can support non-technical folks in adding images and scholarly and general-audience blog posts. Using git/GitHub and Wax represents “a relatively high but general-purpose learning curve.” Learning how the web works in a generally applicable way enhances our understanding of digital production as humanities scholars. Working in a software repository helps make digital work visible.

Kai Prenger Personal Journal Entry 3/15

My thoughts on our semester-long project have been haunted by questions of sustainability since attending a skills lab on the subject, as it relates to digital scholarship, hosted by CUNY’s Interactive Technology and Pedagogy (ITP) program a month ago. Over the course of the workshop, Jesse Merandy, the first GC PhD candidate to complete a digital dissertation, elaborated on a central theme of what happens to digital projects in academia over time: they age, they falter, and, eventually, with changes in technologies and tools, they stop working. Beyond remarking on how a digital project will age and decay over time, Merandy also offered strategies to preserve the discoveries generated by a digital academic project. In a nutshell, Jesse advocated for nth-level documentation of research methods and thought processes (within the tolerances of sanity), ideally taking place continuously throughout the project lifecycle.

While the tool-selection decisions for our mainframe project attempted to counteract the speed at which digital work rots by way of statically generated content, markdown content, zero-dollar hosting, and publicly available code repositories, we can’t escape the fact that eventually our project will be hard to access as originally designed, if at all. To that end, Connie and I are focusing on how we can incorporate process documentation into, or alongside, our project website. Some of the subjects worth documenting, and the rationale for each, are found below.

Reuse of our personal journal entry blogs

Reusing our personal journal entries from the class commons site will reflect our week-to-week thoughts about the project’s progress, our work methods, and our moods. Some reflection beyond the weekly journal entries will also be included, like topics for further discovery, or themes we weren’t able to address during the allotted time frame.

Narrating project challenges

Narrating the challenges we face during our time working on the project. One striking example is the difficulty of accessing physical archives for serendipitous findings. Another is scoping a doable topic within the time allowed and the archival material we were able to collect, both of which may not match any of the project proposal’s original themes or ideas.

Effect of time constraints

Detailing the scope and time titrations necessary to complete as coherent a project as possible. The original project imagined a staff of four to complete the resultant website. With two team members, multidisciplinary role assignments and the need to learn the technology as well as the subject matter compress the original “thick history” proposed before the project started. Of course, time constraints also help uncover new avenues to explore and novel studies to conceive of. For instance, we’ve been a little surprised by how abstract and arty some of the advertising creative turns out to be for what amounts to business equipment. We’re also exploring deformance and inspired collage as research methods into some of the themes we see in the cultural history of mainframe computers.

Identifying sources of inspiration

Identifying sources of inspiration for the project that aren’t constituents of the end product seems relevant for any future development. We can imagine a Zettelkasten-style table for use in research down the line, as unused resources can emerge as relevant upon further research and analysis. We’ve been compiling research sources as well as inspirations from various forms of digital media.

Important side effect of documentation

One important side effect of documenting a digital humanities (DH) project from a bit of a remove: you are able to receive the cash value of the field’s enthusiasm for prototyping. That is to say, we can explicitly articulate weaknesses of the project output without feeling like shortcomings are synonymous with project failure.

Mainframe project data management plan

[Image: Grace Hopper seated at a mainframe; pixelsort parameters angle=60, lower_threshold=0.2, upper_threshold=1.0]

Types of data

The types of data present in the Mainframe: Past-Present project will consist primarily of images and text. Images will be stored as JPEGs, while text will be rendered from markdown. We may have video content embedded in the website, which would be stored in a publicly accessible Vimeo collection. Adobe Photoshop, After Effects, hex editors, and software libraries like pixelsort and glitch-this will be used in deformance of original archival content. During archival material collection, Dropbox will be used to share images from archives and magazines. All images and writing will be hosted on a publicly visible GitHub repository, which will be mirrored on GitLab for backup/redundancy and as an alternative hosting platform.
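
As a sketch of that deformance tooling: the keyword arguments below are parameters of the Python pixelsort package and match the settings recorded in the Grace Hopper image caption above, though the file names are placeholders:

    from PIL import Image
    from pixelsort import pixelsort

    # apply the settings recorded for the Grace Hopper image
    original = Image.open("grace_seated.jpg")
    deformed = pixelsort(
        original,
        angle=60,             # sort along a 60-degree axis
        lower_threshold=0.2,  # sort only pixels above this lightness...
        upper_threshold=1.0,  # ...and below this one
    )
    deformed.save("grace_seated_pixelsorted.png")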

Standards for documentation & metadata

The deformance process will be documented as part of the website content. Both Wax and CollectionBuilder are informed by the FAIR Principles for Library, Archive and Museum Collections. Images will follow the naming convention of description_source_year or source_volume_edition_page#_year. Directory structure and locations for images and texts are prescribed by the frameworks mentioned above, which we will follow absent any argument for changing those defaults. Metadata for the collection will be available to browse on the website, and will be downloadable as a CSV.

Data reuse, accessibility, sharing

Images, text, and website code will be shared on the git repository hosting sites GitHub and GitLab, and available for cloning and forking via git. Original archival images and deformanced versions will be available for download from a web browser or via the git repo. If video content is provided in the archive, instructions for using youtube-dl will be written and made available. Due to the experimental nature of this project, reuse is permissible under the FreeBSD license.

Preservation & long term access

Mainframes: Past-Present will be available on both GitHub and GitLab, where hosting is free. In the event that we don’t have funding for a domain name, the archives will still be accessible. If desirable, we will create a BagIt bag for the project with bagit-python, to be stored at CUNY’s Mina Rees Library.
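
Creating that bag is a short script. This is a minimal sketch using bagit-python’s make_bag call; the directory name and bag-info metadata are placeholders:

    import bagit

    # turn the project directory into a BagIt bag in place,
    # writing manifests with checksums for every file
    bag = bagit.make_bag(
        "mainframes-past-present/",
        {"Contact-Name": "Mainframes Project", "Source-Organization": "CUNY GC"},
    )

    # verify completeness and checksums before handing off for storage
    assert bag.is_valid()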

Personal Bio / Contribution Statement

Kai Prenger graduated from Reed College in 2004, earning a BA in Literature and writing and defending a thesis called The Same But Different, an experiment applying the post-colonial theories of Homi K. Bhabha to 19th-century American literature. His academic interests include misusing tools and theoretical frameworks on unexpected topics, using arbitrary processes to transform subject matter (deformance), and exploring areas deemed boring by a plurality of scholars. Previous work for pay includes dry cleaning, construction, social work, financial operations, product management, and data analysis and engineering. As a graduate student, he pursues play as a productive strategy in scholarship. His main contributions to the project include development and theoretical and educational context; he shares project management duties with Connie, the co-project leader.

Revised Mainframe Project Proposal

Overview

The Mainframe Project [new title needed] is a digital collection of archival media, manipulated through deformance and other means, that correlates and complicates our understanding of social relationships and human interactions with computers, using mainframes as a point of departure. By focusing on the computing devices in the workplace before the advent of personal computers, we hope to defamiliarize some common assumptions about computing informed by contemporary culture, and think through how the social context around computing might have evolved differently given the perceptions of computing from the 1950s through the 1970s. Specifically, this project will probe the underlying assumptions of human-computer interaction from that time period, as well as the gender politics of computing in the workplace before the personalization of computing. It will conduct these investigations by looking at archived material such as advertising, manuals, and other ephemera, and relating it to scholarly work tangentially related to mainframes as well as scholarly work on computer-human interaction contemporaneous with mainframes, like cybernetics.

Problem addressed

There is a dearth of (digital) humanities scholarship about mainframe computers. Antecedents can be traced in works from various “media archaeologists”: Tung-Hui Hu’s A Prehistory of the Cloud (cultural poetics of cloud computing), Kirschenbaum’s Track Changes: A Literary History of Word Processing (word processing as software), Liu’s The Laws of Cool (knowledge work), Lisa Gitelman’s “Raw Data” is an Oxymoron (various histories and pre-histories of data), Friedrich Kittler (general theory), among a few others. But mainframe computing is an implementation detail in larger arguments, often glossed over. So much of our computing inherits key concepts from the features and limitations of the age of Big Iron.

This project may also turn its eye towards other scholarly and non-scholarly writing coterminous with the Mainframe and related to computing. Examples include Leary’s speculative work like How Have Computers Empowered Humans?, but also early scholarly efforts to understand human and machine interaction, like Ashby’s An Introduction to Cybernetics, the work of Norbert Wiener, and Bateson’s Steps to an Ecology of Mind. Books for larger audiences that retrospectively evaluate mainframe computers may also be helpful in probing some of the social and interactive nuances of mainframe computing.

Our project also involves the manipulation of visual media in the tradition of deformance as initially conceived by Lisa Samuels and Jerome McGann in Deformance and Interpretation. Scholars practicing deformance in the visual realm serve as an inspiration as well, including the many videographic deformations of Singin’ in the Rain by Jason Mittell (https://vimeo.com/jmittell) and the photo glitches of Michael J. Kramer (Glitching History).

Project audience

Scholars and students interested in imaginative explorations of what it was like interacting with and encountering mainframe interfaces in person, by proxy (timesharing), and as a cultural phenomenon in popular media and the workplace. We believe there’s a need for educational material for non-technical audiences in the broader public, given relative unfamiliarity with the technologies involved.

Contribution and impact

The study of mainframe computing is not only a historical exercise in preservation. Because of the foreignness of mainframe computing to contemporary experience, it becomes a prism for exploring larger topics related to information technology in societies. As mentioned above, we’re interested in analyzing gender dynamics and computer-human interactions during the rise of mainframe computers. This is a meaningful down payment on other computing topics that future projects could explore in a digital humanities context, such as the introduction of minicomputers and the rise of personal computing.

Final product

A digital collection of augmented photographs (and potentially video) with supporting written analysis and context will be hosted on a website. Material in the collection will come from marketing, manuals and other archival artifacts.

Feasibility assessment

Tool selection

At this moment, the team has leaned toward Wax as the platform on which to build our digital collection. That said, we will reevaluate that choice against other options like CollectionBuilder and Omeka, depending on time constraints, the time necessary for research and media manipulation, the potential to include video, etc.

Our tools for manipulating photos and videos will vary. Physical collage will be done by hand, though we reserve the right to use digital tools like Adobe’s InDesign to perform the same operation. PhotoMosh, Hex editors and Python libraries may be used in glitching, pixelsorting and other deformance techniques.

Finally, depending on the media types included in our collection, we may use a third-party platform to host assets like video (e.g., YouTube or Vimeo).

Team composition

Our team is composed of two scholars (Connie Cordon and Kai Prenger) who cover most tasks in a primary/secondary structure.

  • Project Manager → Connie Cordon primary, Kai Prenger secondary
  • Developer → Kai Prenger primary, Connie Cordon secondary
  • Visual archival researcher → Connie Cordon primary, Kai Prenger secondary
  • Textual researcher → Kai Prenger primary, Connie Cordon secondary
  • Outreach → Connie Cordon primary, Kai Prenger secondary
  • Project updates → Drafted together, Kai Prenger primary, Connie Cordon secondary

Barriers and challenges

We believe the biggest challenges are related to original research and tracking down archival material on specific subject matter, though university archives/websites and institutions like The Computer History Museum may mitigate the risk of finding nothing worth writing about. We have some anxiety about how much customization we’ll want in the collection itself, leading us to evaluate whether Wax is appropriate, or whether CollectionBuilder or even Omeka might be a better fit (using this gist to evaluate). Time spent developing the site will be traded off against time to research and develop intriguing assets.

Pandas: The Bare Basics (NYCDW Week 2022)

I attended the “Pandas: The Bare Basics” class on Friday, February 11. Led by Patrick Smyth, a PhD graduate of CUNY GC and founder of Iota School, the workshop aimed to introduce students without programming experience to the pandas library for data analysis.

Data analysis in action

Before the workshop started, Patrick emailed attendees instructions for preparing a Jupyter notebook environment, offering two methods. For participants with a Google account looking for the least hassle, Google Colaboratory lets you start up a Jupyter notebook with all the necessary libraries installed, automatically saved to your Google Drive. For Google skeptics, he also suggested installing Anaconda and running a Jupyter notebook from that platform. Having some Python experience, I chose another method entirely. My previous experience with pandas is limited to formatting JSON retrieved from an API into data frames or CSVs, so I was interested in learning more about the data analysis applications of this library.

We began our journey by learning how to import pandas into our notebook and work with the basic data structures in pandas: Series and DataFrame. A Series represents one-dimensional data; in Python, an analogue would be a list, a collection of discrete elements in an ordered sequence, such that you can reference them by index. Note that in most programming languages, the first element is at index 0. See the figure below.

[Figure: getting the last element of a Python list by index]

In a list, in this case [‘p’, ‘r’, ‘o’, ‘b’, ‘e’], you reference elements by index. For example, to get the last element of a list without knowing its length, you’d request index -1 (‘e’); to get the first element, you’d use index 0.
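
In code, that indexing looks like this (a minimal illustration using the same list):

    # the list from the figure above
    letters = ['p', 'r', 'o', 'b', 'e']

    print(letters[0])   # first element -> 'p'
    print(letters[-1])  # last element, without knowing the length -> 'e'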

DataFrames are two-dimensional data structures. The easiest commonplace example of a two-dimensional data structure is a table or spreadsheet, with rows that represent individual data records and columns that are attributes of each record. In pandas, a data frame might look like the figure below:

[Figure: an example pandas DataFrame]

Rows are indexed by default from 0 to n; columns have names identifying each attribute.

The first hour of the workshop had us creating simple Python lists and dictionaries, running aggregate calculations like means, minimums, and maximums on pandas Series and DataFrames, and learning how to isolate one column in a two-dimensional dataset for analysis, either by column name or by index.
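
A sketch of those exercises (the values are invented for illustration):

    import pandas as pd

    # a Series: one-dimensional, labeled data
    prices = pd.Series([150, 90, 210], name="price")
    print(prices.mean(), prices.min(), prices.max())

    # a DataFrame: two-dimensional, built here from a dict of columns
    df = pd.DataFrame({
        "neighbourhood": ["Astoria", "Mott Haven", "Flushing"],
        "price": [150, 90, 210],
    })

    # isolate one column by name, or by position with iloc
    print(df["price"].mean())
    print(df.iloc[:, 1].max())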

The second half of the workshop was devoted to performing exploratory data analysis (EDA) on a real dataset available online. We read in a CSV of Airbnb data, including data on rentals in the five boroughs of New York City. Because there was a strong contingent of Bronx and Queens participants in this workshop, we focused on comparing the average prices in various neighborhoods from those boroughs to the average in NYC overall. We also plucked out listings that might skew our averages.
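
The borough comparison reduces to a groupby. This sketch assumes the column names from the public Airbnb listings CSV (“price”, “neighbourhood_group”), which may differ from the exact file we used:

    import pandas as pd

    listings = pd.read_csv("listings.csv")

    # city-wide average nightly price
    nyc_avg = listings["price"].mean()

    # average price per borough, and how far each sits from the city average
    by_borough = listings.groupby("neighbourhood_group")["price"].mean()
    print(by_borough - nyc_avg)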

What stuck with me most from this workshop was the instructor’s comment that, while data analysis like this isn’t solely constitutive of a digital humanities project, having data analysis as one supporting pillar of an argument you want to make can be surprisingly effective. He spoke about using analysis with pandas/matplotlib to demonstrate the efficacy of a particular program in an academic administrative presentation he gave last year.

I dropped the Jupyter notebooks we worked on together in the workshop in a repo on GitHub for curious classmates.

Kai’s Skillsets

Hi! I’m Kai. I work as a Data Engineer, a specialized kind of software engineering that involves writing code to replicate and ingest data, transform data semantically, orchestrate repeated processes, and provide the DevOps/SysAdmin support and automation for all those activities. I am a recovering product manager. My academic background is in literature, especially the transatlantic 18th- and 19th-century sort, and medieval literature. My enrollment in the Digital Humanities program is in the spirit of mashing up my technical skills and humanistic topics of inquiry.

In general, if any project group wants an opinion on tradeoffs between multiple technical solutions, I’m happy to provide it, as I’m generally literate in this way. Throwing that out there.

Design/UX: This realm is not really my strong suit. I would not enjoy contributing to this facet of a project.

Development: I’m a SQL expert. Although I consider myself a hacker, I can write professional data pipelines in Python (and on the JVM), and have orchestration experience with Apache Airflow: refreshing and ingesting new data from defined sources and monitoring that process. I’m fairly decent at scraping data from APIs/websites too. I am interested in expanding my skills in two areas, if the project I’m on allows for it:

  1. Text analysis/natural language processing, including sentiment analysis, topic modeling, word2vec, and potentially machine learning to aid in the above. I’d use Python (e.g., Gensim) for medium to large corpora, or Scala (e.g., Spark) for very large corpora (unlikely, I think, for these semester projects).
  2. I’ve had a recent but abiding interest in learning Ruby on Rails 7 because of its reemphasis on solo-developer web applications and quick time from idea to prototype. It would also be entirely new to me. If your project needs more features and flexibility than what CBOX offers, let’s discuss.

Documentation: I love reading and writing documentation. I’d be happy doing this for almost any project or subject matter.

Outreach: I do not have an active social media presence, having deleted my accounts years ago. That said, if it benefits the project and there’s no one suitable to do the work, I’m fine helping out in this way. Just don’t expect a large network to come in tow.

Project management: I have quite a bit of professional experience in this for go-to-market and software projects. If it’s important to track project status, I can help out in a pragmatic way by adding just enough process to help out teammates and throw off enough info to report how the project is coming along. I don’t particularly want to do this, but it’s fine if it helps out the team I’m on.

Research: I would like to become better at this. I can use Google, and backtrace through bibliographies and works cited. But I’m fuzzy on resources, and I’ve never engaged the GC library, so I feel a bit behind and rusty in terms of my research skills.