Exploratory markup and composing for recomposition

This post is a continuation of a previous post: (Re)orienting TEI in the classroom: Notes on an evolving project. My goals here are two-fold: first, to work through Wendell Piez’s brilliant article, “Beyond the ‘descriptive vs. procedural’ distinction” (Markup Languages: Theory and Practice 3.2 (2001): 141-172), and to trace how his ideas connect to my project; second, to make some connections from markup theory and DH to rhetoric and composition.

Piez’s article is extremely useful in orienting oneself towards the idea and function of markup languages. He argues for a more complex, matrix-like characterization of markup languages that resists the simple binary of procedural vs. descriptive. Basing his new distinction first on the role of validation in various markup languages, Piez designates “exploratory, mimetic markup” (151) as that markup which proceeds from the object (or the instance) to the model. This “bottom-up” (152) tagging seeks as faithfully as possible to encode that which is interesting to the encoder, or that which is interesting in the document being encoded. The model emerges after the iterative tagging process has concluded.
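A minimal sketch of what such bottom-up tagging might look like (the document and all element names here are my own hypothetical assumptions, not a prescribed schema): the encoder tags whatever seems salient in the instance, and only later consolidates those ad-hoc elements into a formal model.

```xml
<!-- Hypothetical exploratory markup: tags are invented as features of the
     document catch the encoder's interest, before any schema exists. -->
<letter>
  <opening>Dear Sarah,</opening>
  <p>I arrived in <place>Providence</place> on
     <date when="1842-05-17">the 17th</date>, amid
     <unrest type="political">great commotion in the streets</unrest>.</p>
  <!-- <unrest> is a one-off tag; if it recurs across documents it earns
       a place in the emergent model, and if not it may be folded into a
       more general element during consolidation. -->
</letter>
```

Only after iterating across many such instances would the encoder decide which of these elements and attributes survive into a formalized schema, which is the sense in which the model emerges from the tagging rather than preceding it.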

Two connections are readily apparent to my own research. The first is a connection (largely, though not completely, unacknowledged in Piez’s article) to grounded theory (GT) as applied by scholars in composition and rhetoric (for a useful overview of GT as applied to rhetoric and composition, see: Farkas, K., & Haas, C. (2012). A grounded theory approach for studying writing and literacy. In K. M. Powell & P. Takayoshi (Eds.), Practising research in writing studies (pp. 81-95). New York, NY: Hampton Press). GT is generally applied to large, unwieldy, and unpredictable data sets: unstructured survey responses, interview transcriptions, and text artifacts about which there is no a priori knowledge are common data types for which a grounded approach is appropriate. A model “emerges” as the more exploratory codes cultivated in the open coding phase are consolidated. And the theory is “grounded” in that the claims (whether these are genre distinctions, types of responses, etc.) are grounded ontologically in data. Like Piez’s description of exploratory markup, the process is necessarily iterative and cooperative—open coding, pulling in data, consolidating codes, memo writing—and the justifications for code distinctions are hashed out between researchers. Piez, to his credit, seems to recognize the viability of exploratory markup in the social sciences:

It could prove to be a useful methodology in psychology, sociology, economics—any study with a complex and manifold data set—and a source of hitherto-unthought-of ontologies and modeling techniques. (Piez 152)

Indeed. And it is the process of formalization that produces self-reflexive methodological choices. As Julia Flanders argues, these choices arise out of our creation of digital models of physical artifacts and are “productive: not of forward motion but of that same oscillating, dialectical pulsation that is the scholarly mind at work” (I’ve written more extensively on this topic in this post). There may be an additional benefit to an exploratory markup scheme in this process: it is a built-in way of making your data public and accessible. Once you formalize the markup scheme, you could publish it as-is, allowing others to easily read, interpret, and manipulate your data (in XML) across platforms and for manifold purposes. This openness is, hopefully, generative of that vacillation Flanders so rightly points out.

The second connection I see with this portion of Piez’s article is to the precursors to my project. But before we talk about Trey Conatser and Kate Singer, Piez’s argument needs to be teased out a bit more. As Piez claims, “descriptive” is really an insufficient way of describing markup languages like the TEI. Instead, he calls TEI, and languages like it, “generic markup languages,” occupying a middle ground between strict validation and exploratory markup. This is a productive middle ground for Piez, as a certain amount of validation is helpful for scalability and processing. So if his first axis for describing markup languages has to do with validation (from strict to exploratory), he adds a second axis for the prospective- and retrospective-ness of markup languages (158). That is, the degree to which a markup language seeks to describe an existing object (retrospective) or the degree to which it “seeks to identify a document’s constituent parts as a preliminary to future processing” (Piez 158). The deceit of generic markup languages is that they are purely descriptive (or purely retrospective), while, in actuality, there is always some engagement with procedural concerns, a prospective looking forward to processing (Piez 153-5). Piez usefully outlines this tension:

So generic markup involves us in a strange paradox. It forgoes the capability of controlling behavioral “machine” semantics directly, but wins, in return, a greater pliability and adaptability in its applications for human expression. (Piez 162)

This is the middle ground that makes something like TEI compelling for use in the classroom. A customized TEI schema seems to be the perfect apparatus through which the complex work of marking-up and displaying student-authored documents can proceed. Importantly, for my purposes, Piez designates markup languages as

far more than languages for automated processing: they are a complex type of rhetoric working in several directions at once, often in hidden ways. Inasmuch as markup systems then may begin to resemble other textual systems (such as literary canons or conventional genres), it is reasonable to turn to rhetorical or literary critical theory for explanations, or at least higher-level characterizations of them. (Piez 162-163)

Piez goes on to describe just the kind of peculiar rhetorical situation embodied in markup languages:

One thing that does need to be observed here, however, is that in markup, we have not just a linguistic universe (or set of interlocking linguistic universes) but also a kind of “rhetoric about rhetoric.” That is, markup languages don’t simply describe “the world” — they describe other texts (that describe the world). (Piez 163)

Compelling. And interesting. It makes one wonder: how does Piez’s conception (or inception?) of rhetoric about rhetoric shift when the other texts are written by those composing the texts that describe them? Or when they are one and the same text, as in the case of composing directly in markup? Piez proceeds to apply the rhetorical terms prolepsis and metalepsis to markup languages, arguing that the productivity of TEI (and generic languages like it) arises in the tension/slippage between its proleptic and metaleptic characteristics. TEI tries to be retrospective while also benefitting from strict validation schemes (looking forward).

It is retrospective tagging for prospective purposes: thus, it works by saying something about the past (or about the presumed past), but in order to create new meaning out of it. (Piez 167)

And now we can start to think about how this relates to Singer and Conatser. The two authors take very different approaches to the use of XML/TEI markup in their undergraduate courses. On the one hand this makes sense—Singer was teaching a course on poetics while Conatser was teaching within the confines of a (seemingly) strict writing program. Both were adapting the TEI for new purposes. I’ve written pretty extensively about Conatser’s project before, so I won’t go into much detail here beyond my primary point of departure: instead of dictating the schema for writing assignments as Conatser does, I’d like to think about how building or adding to an emergent customized TEI schema, as necessary tags arise, can benefit the metacognitive awareness of students as they compose. In this way, I am adopting a form of Piez’s “exploratory markup,” though with some differences, since students will be encoding their own texts.

In her article, “Digital Close Reading For Teaching Poetic Vocabularies,” Kate Singer justifies exploratory tagging in her approach, though she does not explicitly define it as such, by calling attention to the shortcomings of a top-down model of markup—that is, one that proceeds from the model (or schema) to the tags:

If TEI users have been less interested in interpretive than editorial markup, perhaps it is because TEI appears to read both structure and meaning too definitively into a text that should be a resource or open set of data for any scholar to interpret at will. Yet, this may be more the fault of our blindness to the ways in which markup may act not as formalizing structures but can mark, instead, moments of textual instability, suggestiveness, or ambiguity. This method would pinpoint moments of figuration that educe both semantic or non-semantic abstraction. Certain TEI tags, precisely because of their ambiguity, became generative in such a way.

Singer’s analysis of the discussions that took place in her course exemplifies this claim. Decisions to mark up line groups led the students to think about the historical codification of the stanza, and how these particular poems push back on this definition for these particular women writers. This is what is so compelling about markup as a scholarly act. Not that formalization will lead to interesting conversations about formalization as such—though that will certainly happen—but that encoding will lead to discoveries about the original objects of study.
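The line-group decisions Singer describes map onto real TEI elements (`<lg>` and `<l>`); a brief sketch, with the understanding that the `type` values here are conventional choices an encoder makes, not values mandated by the TEI Guidelines:

```xml
<!-- Marking a line group forces a decision: is this a stanza in the
     historically codified sense, or something the poem resists calling
     a stanza? The @type attribute makes that decision explicit. -->
<lg type="stanza">
  <l>The first line of the grouping,</l>
  <l>and the second, which completes it.</l>
</lg>
<!-- An encoder who reads the grouping as pushing back on the stanza's
     definition might instead choose a different characterization: -->
<lg type="verse-paragraph">
  <l>Lines whose grouping resists</l>
  <l>the codified form of the stanza.</l>
</lg>
```

The interpretive work happens precisely in choosing between such encodings, which is why the tagging session becomes a conversation about poetic form.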

We can also see Piez’s assertions of a prospective/retrospective slippage or tension play out in Singer’s course through the way that markup is actually taken up by the students:

As mentioned above, students were especially gunning for the point at which they could transform their underlying encoding to “the real edition.” The disjunction between the encoding “signified” or genotype and the transformed document “signifier” or phenotype presented some important practical and theoretical issues (Buzzetti 2002). Students began to ask why TEI-encoded websites didn’t always have a button or function to display encoding to savvy readers, as another type of annotation or editorial self-awareness. (Singer)

The focus on the end user/visualization is something that, in my limited experience, I have already noticed is a major concern for students using TEI/XML for the first time. What does this do? How will people interact with this text? And I agree with Singer’s students—why isn’t the markup available for savvy readers? In some cases, like the Dorr Rebellion project, there is this option. TEI projects hosted on TAPAS will also have built-in mechanisms for accessing the TEI that are beyond right-click → View Source. The foregrounding of the markup as a visible analytical language will be central to the ultimate form of visualization for my project.
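One way to make the encoding available to savvy readers, beyond right-click → View Source, is to carry the source element names through the transformation so a stylesheet or script can re-surface them on demand. A hedged XSLT sketch, assuming TEI input; the HTML class names and `data-` attributes are my own invented conventions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: render TEI line groups as HTML while preserving the original
     element name and @type in data- attributes, so a CSS or JavaScript
     toggle could expose the underlying markup to interested readers. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0">
  <xsl:template match="tei:lg">
    <div class="stanza" data-tei-element="lg" data-tei-type="{@type}">
      <xsl:apply-templates/>
    </div>
  </xsl:template>
  <xsl:template match="tei:l">
    <span class="line" data-tei-element="l">
      <xsl:apply-templates/>
    </span>
  </xsl:template>
</xsl:stylesheet>
```

The design point is that the transformed “phenotype” need not discard the encoded “genotype”; it can keep the markup legible as a visible analytical layer.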

While her approach is very useful to me in conceiving my own, Singer is ultimately invested in another traditional humanities methodology—close reading. The final project in her class was not a scholarly edition of poetry, but rather a hermeneutical paper based around the close reading of texts. In the end, is Singer privileging a ‘high criticism’ over the ‘low criticism’ of scholarly edition production? Does this re-inscribe the divide between the two scholarly activities? What she seems to be after is more than an acknowledgment or embodiment of how the edition/database/visualization is interpretive; instead, she wants that interpretive work to inform the activity of close reading. This is an interesting departure that speaks to the differences between the goals of our projects. I would like to retain the open and interpretive tagging that Singer’s approach employs while pursuing the metacognitive awareness that Conatser aims for. Maybe the only difference is that instead of a scholarly edition, I am asking students to create encoded portfolios of their work—portfolios that would be neither institutional nor commercial.

Another approach, as opposed to ePortfolios—which, in my experience, often struggle with audience—may be to ask students to compose for recomposition. In some ways, this is a reinscription of the kind of work scholarly editors have always undertaken—creating a work that will be used as the object of interpretation for other scholars. This may connect to a thread of writing studies that foregrounds recomposition/remix in public writing. Jim Ridolfo and Dànielle Nicole DeVoss posit the term “rhetorical velocity” in the webtext introducing the concept:

The term rhetorical velocity, as we deploy it in this webtext, means a conscious rhetorical concern for distance, travel, speed, and time, pertaining specifically to theorizing instances of strategic appropriation by a third party. In thinking about the concept, we drew from several definitions:

  1. Rapidity or speed of motion; swiftness.
  2. Physics: A vector quantity whose magnitude is a body’s speed and whose direction is the body’s direction of motion.
  3. The rate of speed of action or occurrence.

Combining these definitions allowed us to create a term (i.e., rhetorical velocity) that allows us to wrestle with some of the issues particular to digital delivery, along with layering in a concern for telos.

To embody their idea of composing for recomposition, Ridolfo and DeVoss deliver their text (digitally) in the form of a press release which, they argue, is a generic example of writing with remix or recomposition in mind that has been around for decades. One could argue that a scholarly edition of a text (whether print or digital) is a form of composing for hermeneutical recomposition. The edition as such is not the end goal; rather, it is the scholarly activity which it enables. Is this a possible approach? How would a model of public composition for recomposition shift the way the students prospectively (and retrospectively) mark up their texts? How, to use Jerome McGann’s phrase, will they “deform” their texts for this purpose? I don’t think it is possible for this kind of rhetorical velocity to arise in Conatser’s version of XML in the writing classroom. Rather, a more exploratory approach to the markup is required to promote this awareness: an approach which allows a customized schema to emerge from the actually-existing tags arising from the actually-existing needs of students in composing their texts.

Thinking through theory in DH (through JDH 1.1)

This is the first (of several) posts that center around the reading list I am compiling (with Ryan Cordell) for my first comprehensive exam. I’m starting here by trying to frame the central debate around which I will construct the ‘field’ portion of my field and focus paper. I could call this debate hack vs. yack, or theory vs. practice. But (spoiler alert) I think both of those are problematic formulations of a more interesting debate about the role of theory in DH. Is the onus on the DH scholar to make explicit the sometimes nondiscursive theoretical underpinnings of a project? Is nondiscursivity exclusionary? Or rather, is the onus on the ‘reader’ of said project to develop critical literacies capable of discerning the implicit theory of a project? What does ‘theory’ even mean in this context? What counts as DH and how are projects recognized?

These questions are generative, as I look back on this reading, of new questions for me: what is the legacy of critical editing in DH? Is the historical shift from humanities computing to digital humanities representative of something more than the release of The Blackwell Companion? How does a media framework (like Hayles and Pressman’s Comparative Textual Media) complicate the centrality of making/building/design-as-research in DH?

But I’ll quit stalling here and get to the meat of the thing. Below is my (initial) thinking through the idea of theory in DH, mainly via the inaugural issue of the Journal of Digital Humanities (with some other stuff peppered in). Here we go.

While the idea of “hack” vs. “yack” is, in many ways, a false dichotomy—originating from an off-hand comment at the first THATCamp (Nowviskie “Origin”)—it has become a centralizing debate in the DH community. Nowviskie does well to dispel the notion that digital humanities practitioners actually subscribe to a zero-sum view of doing vs. saying, while pointing out the problems in the way this debate has been taken up (mostly by those outside the field) (“Origin”). This problematic formulation, however, sparked a useful debate around the role of theory in the digital humanities, taken up in the inaugural issue of the Journal of Digital Humanities. Natalia Cecire opens this special section by claiming, “the debates around the role of ‘theory’ in digital humanities are debates about the relationship between saying and doing” (“Introduction” 44).

As must by now be evident, I am not, for my own part, persuaded that the digital humanities’ epistemology of building is enough of a saving grace to render the hack/yack division a happy fault. My sympathies rest with bell hooks’s insistence that theory can solve problems that urgently need solving, that articulating in words how things work can liberate. I am troubled by the ease with which the epistemology of building occludes and even, through its metaphors, legitimizes digital humanities’ complicity with exploitative postindustrial labor practices, both within the academy and overseas, and I wish to see digital humanities dismantle as well as build things. And yet, as the contributions to this special section attest, the methods and metaphors of digital humanities are far from settled. What is needed is not self-flagellation (much less defensiveness) but attempts to develop the discipline within which we wish to work. (“Introduction” 49)

In her own contribution to the section, Cecire underscores the theoretical burden of established DH practitioners to make explicit the often tacit knowledge of the field (“DH in Vogue”). Here, she responds to the idea of a fundamentally nondiscursive theoretical mode argued for by Ramsay (“On Building”), Rockwell, and Scheinfeldt. As the “eternal September” (Nowviskie, “Eternal September”) of DH means rapid growth, an influx of amateurs, and institutional support, the field cannot afford to be nondiscursive. Cecire argues that it is not that DH isn’t theoretical, it is that real consequences arise when “we fail to theorize practice, or when we insist on the tacitness of our theorizing” (56).

Jean Bauer argues that the theory brought to bear on digital projects can inflect the project as a whole—from vocabulary to color choices (68). Her idea is to create new tools or designs to make this theory visible, while commenter Chris Forster spells out ways that more engagement with capital-T Theory can benefit DH projects in some of the same ways it has proved productive in traditional humanities work (71-2). In some ways, Benjamin Schmidt agrees with Forster in his claim that all DH work should begin “with a grounding in a theory from humanistic traditions” (60). Schmidt appears to take issue with Ramsay’s idea of a hermeneutics of screwing around in his opinion that “we should have prior beliefs about the ways in which the world is structured, and only ever use digital methods to try to create works which let us watch those structures in operation” (60).

One proposed solution comes from Trevor Owens in the form of design-based research, which rests on the idea that “there is some kind of hybrid form of doing, theorizing, building, and iterating that we should turn into a methodology” (83). Owens contends that all designs have “implicit and explicit arguments inside them” (83), and that one solution to the problem of tacit knowledge is to write these arguments down. In some ways we are already doing this—projects already generate writing that spells these arguments out in the form of grant proposals, memos, documentation, etc.—we simply need to value these texts as scholarly production. In a similar move, William G. Thomas asks, “in the broadest terms […] how does scholarly practice change with digital humanities?” (64). His answers call for a reorientation of the digital archive as “intentional and interpretive” (64), which calls into question how we value collaborative scholarship and what scholarly argument actually looks like in digital form.

Elijah Meeks and Patrick Murray-John both take up the relationship between computer science/programming and the humanities. Meeks, in an attempt to keep the values of comp. sci. from being unreflectively taken up by humanists, calls for humanists to begin to

pick up coding, write weird and not-at-all pragmatic software, and, perhaps, create standards through practice or, more likely, just create lots and lots of weird code that better describes queer black artists in the twenties or a republic of letters or Walt Whitman. (80)

This seems to me to relate to Jerome McGann’s idea of “deformance,” referenced in Mark Sample’s piece on critical making in the classroom. Murray-John, on the other hand, sees a possible convergence of the computer science / humanities dichotomy through a shared value of noticing:

[a humanist] does need some training to be able to start noticing the difference between two data models that at surface appear to describe the same things. And, coders should be ready to learn what useful things theorists can offer that, despite a first appearance of scope creep, might just be valuable things to consider building into the code. (77)

That this noticing is two-directional is reminiscent of McPherson’s call for humanistic theory to feed back into the computational models and tools that DH projects use, rather than simply adopting tools and practices which reinscribe problematic worldviews:

In extending our critical methodologies, we must have at least a passing familiarity with code languages, operating systems, algorithmic thinking, and systems design. We need database literacies, algorithmic literacies, computational literacies, interface literacies. We need new hybrid practitioners: artist-theorists, programming humanists, activist-scholars; theoretical archivists, critical race coders. We need new forms of graduate and undergraduate education that hone both critical and digital literacies.

Moya Bailey speaks to a certain kind of noticing through her discussion of recognition. She argues that there has been lots of critical feminist work that incorporates the digital, but is not recognized as DH. It is therefore important to resist an “add and stir” model of diversity in DH by

meeting people where they are, where people of color, women, people with disabilities are already engaged in digital projects, there’s a making of room at an already established table. (Bailey)

More than a facile recognition of this work as DH, this can take the field in new and productive directions that may expose, examine, and resist the implicit structural identities of the field (be they ableist, white, male, capitalist, etc.). It is these problematic worldviews that are central to the role of theory in the digital humanities. The question of “what counts?” is similarly taken up by Fiona M. Barnett in her article “The Brave Side of the Digital Humanities” through a close reading of the #AntiJemimas blogs. As with Bailey, Barnett emphasizes the openness of DH as a field—this stuff is literally being hashed out in real time at conferences, in blogs, and via Twitter. As Alexis Lothian outlines in JDH, the impetus behind #transformDH is a response to just this openness. DH can be transformed, perhaps through a loose and messy confederation of scholars over Twitter. As such, Lothian posits #transformDH as one possible answer to the question, “what do we mean when we say ‘theory’ in DH?”


[Sorry for the messy references! The logic here is to cite the entire JDH volume followed by other citations in chronological order]

JDH 1.1

Natalia Cecire, “Introduction: Theory and the Virtues of Digital Humanities”

———, “When Digital Humanities Was in Vogue”

Ben Schmidt, “Theory First”

William G. Thomas, “What We Think We Will Build and What We Build in the Digital Humanities”

Jean Bauer, “Who You Calling Untheoretical?”

Patrick Murray-John, “Theory, Digital Humanities, and Noticing”

Elijah Meeks, “Digital Humanities as Thunderdome”

Tom Scheinfeldt and Ryan Shaw, “Words and Code”

Trevor Owens, “Please Write it Down: Design and Research in Digital Humanities”

Mark Sample, “Building and Sharing (When You’re Supposed to be Teaching)”

Alexis Lothian, “Marked Bodies, Transformative Scholarship, and the Question of Theory in Digital Humanities”

Peter Bradley, “Where are the Philosophers? Thoughts from THATCamp Pedagogy”

Tim Sherratt, “It’s All About the Stuff: Interfaces, Power, and People”

Moya Z. Bailey, “All the Digital Humanists are White, All the Nerds are Men, but Some of Us Are Brave”

Other stuff:

Bethany Nowviskie, “On the origin of ‘hack’ and ‘yack’”, blog post

Stephen Ramsay, “On Building”, blog post

Tara McPherson, “Why Are the Digital Humanities So White? or Thinking the Histories of Race and Computation”, from Debates in the Digital Humanities

Fiona M. Barnett, “The Brave Side of the Digital Humanities”, from differences 25.1 (2014)