This post is a continuation of a previous post: (Re)orienting TEI in the classroom: Notes on an evolving project. My goals here are two-fold: first, to work through Wendell Piez’s brilliant article, “Beyond the ‘descriptive vs. procedural’ distinction” (in Markup Languages: Theory and Practice 3.2 (2001): 141-172), and to trace how his ideas connect to my project; second, to make some connections from markup theory and DH to rhetoric and composition.
Piez’s article is extremely useful in orienting oneself towards the idea and function of markup languages. He argues for a more complex, matrix-like characterization of markup languages that resists the simple binary of procedural vs. descriptive. Basing his new distinction first on the role of validation in various markup languages, Piez designates “exploratory, mimetic markup” (151) as that markup which proceeds from the object (or the instance) to the model. This “bottom-up” (152) tagging seeks as faithfully as possible to encode that which is interesting to the encoder, or that which is interesting in the document being encoded. The model emerges after the iterative tagging process has concluded.
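To make that bottom-up movement concrete, here is a minimal sketch of exploratory tagging in TEI-style XML. The interpretive categories (the @type values “claim” and “evidence”) are hypothetical codes an encoder might coin mid-stream, not part of any predefined schema:

```xml
<!-- Exploratory, "bottom-up" tagging: the encoder invents interpretive
     categories (here, values for @type on <seg>) as things become
     interesting, without validating against a predetermined schema. -->
<p>
  <seg type="claim">Markup is itself a rhetorical act.</seg>
  <seg type="evidence">Generic markup works "in several directions
    at once, often in hidden ways."</seg>
</p>
<!-- Only after iterative passes over many documents would these ad hoc
     @type values be consolidated into a formal model, i.e., a schema. -->
```

The `<seg>` element with a @type attribute is standard TEI; what makes the tagging “exploratory” is that the set of @type values is open-ended until the model emerges.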
Two connections are readily apparent to my own research. The first is a connection (largely, though not completely, unacknowledged in Piez’s article) to grounded theory (GT) as applied by scholars in composition and rhetoric (for a useful overview of GT as applied to rhetoric and composition, see: Farkas, K., & Haas, C. (2012). A grounded theory approach for studying writing and literacy. In K. M. Powell & P. Takayoshi (Eds.), Practicing research in writing studies (pp. 81-95). New York, NY: Hampton Press). GT is generally applied to large, unwieldy, and unpredictable data sets: unstructured survey responses, interview transcriptions, and text artifacts about which there is no a priori knowledge are common data types for which a grounded approach is appropriate. A model “emerges” as the more exploratory codes cultivated in the open coding phase are consolidated. And the theory is “grounded” in that its claims (whether these are genre distinctions, types of responses, etc.) are rooted ontologically in the data. Like Piez’s description of exploratory markup, the process is necessarily iterative and cooperative: open coding, pulling in data, consolidating codes, memo writing; the justifications for code distinctions are hashed out between researchers. Piez, to his credit, seems to recognize the viability of exploratory markup in the social sciences:
It could prove to be a useful methodology in psychology, sociology, economics—any study with a complex and manifold data set—and a source of hitherto-unthought-of ontologies and modeling techniques. (Piez 152)
Indeed, it is the process of formalization that produces self-reflexive methodological choices, choices that, as Julia Flanders argues, arise out of our creation of digital models of physical artifacts and are “productive: not of forward motion but of that same oscillating, dialectical pulsation that is the scholarly mind at work” (I’ve written more extensively on this topic in this post). There may be an additional benefit to an exploratory markup scheme in this process: it is a built-in way of making your data public and accessible. Once you formalize the markup scheme, you could publish it as-is to allow others to easily read, interpret, and manipulate your data (in XML) across platforms and for manifold purposes. This openness is, hopefully, generative of the oscillation Flanders so rightly points out.
The second connection I see with this portion of Piez’s article is to the precursors to my project. But before we talk about Trey Conatser and Kate Singer, Piez’s argument needs to be teased out a bit more. As Piez claims, “descriptive” is really an insufficient way of describing markup languages like the TEI. Instead, he calls TEI, and languages like it, “generic markup languages,” occupying a middle ground between strict validation and exploratory markup. This is a productive middle ground for Piez, as a certain amount of validation is helpful for scalability and processing. So if his first axis for describing markup languages has to do with validation (from strict to exploratory), he adds a second axis running from the prospective to the retrospective (158). That is, the degree to which a markup language seeks to describe an existing object (retrospective) or “seeks to identify a document’s constituent parts as a preliminary to future processing” (prospective) (Piez 158). The conceit of generic markup languages is that they are purely descriptive (or purely retrospective), while, in actuality, there is always some engagement with procedural concerns, some prospective looking-forward to processing (Piez 153-5). Piez usefully outlines this tension:
So generic markup involves us in a strange paradox. It forgoes the capability of controlling behavioral “machine” semantics directly, but wins, in return, a greater pliability and adaptability in its applications for human expression. (Piez 162)
This is the middle ground that makes something like TEI compelling for use in the classroom. A customized TEI schema seems to be the perfect apparatus through which the complex work of marking-up and displaying student-authored documents can proceed. Importantly, for my purposes, Piez designates markup languages as
far more than languages for automated processing: they are a complex type of rhetoric working in several directions at once, often in hidden ways. Inasmuch as markup systems then may begin to resemble other textual systems (such as literary canons or conventional genres), it is reasonable to turn to rhetorical or literary critical theory for explanations, or at least higher-level characterizations of them. (Piez 162-163)
Piez goes on to describe just the kind of peculiar rhetorical situation embodied in markup languages:
One thing that does need to be observed here, however, is that in markup, we have not just a linguistic universe (or set of interlocking linguistic universes) but also a kind of “rhetoric about rhetoric.” That is, markup languages don’t simply describe “the world” — they describe other texts (that describe the world). (Piez 163)
Compelling. And interesting. It makes one wonder how Piez’s conception (or inception?) of rhetoric about rhetoric shifts when the other texts are written by those composing the texts that describe them. Or when they are one and the same text, as in the case of composing directly in markup. Piez proceeds to apply the rhetorical terms prolepsis and metalepsis to markup languages, arguing that the productivity of TEI (and generic languages like it) arises in the tension/slippage between its proleptic and metaleptic characteristics. TEI tries to be retrospective while also benefitting from strict, forward-looking validation schemes.
It is retrospective tagging for prospective purposes: thus, it works by saying something about the past (or about the presumed past), but in order to create new meaning out of it. (Piez 167)
And now we can start to think about how this relates to Singer and Conatser. The two authors take very different approaches to the use of XML/TEI markup in their undergraduate courses. On the one hand this makes sense—Singer was teaching a course on poetics while Conatser was teaching within the confines of a (seemingly) strict writing program. Both were adapting the TEI for new purposes. I’ve written pretty extensively about Conatser’s project before, so I won’t go into much detail here beyond my primary point of departure: instead of dictating the schema for writing assignments as Conatser does, I’d like to think about how building or adding to an emergent customized TEI schema, as necessary tags arise, can benefit the metacognitive awareness of students as they compose. In this way, I am adopting a form of Piez’s “exploratory markup,” though with some differences, since students will be encoding their own texts.
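One way such an emergent customization could eventually be formalized is through the TEI’s own ODD mechanism. The sketch below is hypothetical (the ident value and the list of @type values are invented for illustration), but it shows how ad hoc tags consolidated from student encoding might be written back into a constrained schema:

```xml
<!-- A hypothetical ODD customization: once students' exploratory @type
     values on <seg> stabilize, they can be closed into a valList, so
     the schema now validates the categories the class itself produced. -->
<schemaSpec ident="studentWriting" start="TEI">
  <moduleRef key="tei"/>
  <moduleRef key="header"/>
  <moduleRef key="core"/>
  <moduleRef key="textstructure"/>
  <moduleRef key="linking"/>
  <elementSpec ident="seg" mode="change">
    <attList>
      <attDef ident="type" mode="change">
        <valList type="closed" mode="replace">
          <valItem ident="claim"/>
          <valItem ident="evidence"/>
          <valItem ident="counterargument"/>
        </valList>
      </attDef>
    </attList>
  </elementSpec>
</schemaSpec>
```

The point of the sketch is the direction of travel: the closed value list is written last, after the tags have already emerged from students’ composing, rather than dictated in advance.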
In her article, “Digital Close Reading For Teaching Poetic Vocabularies,” Kate Singer justifies exploratory tagging in her approach, though she does not explicitly define it as such, by calling attention to the shortcomings of a top-down model of markup—that is, one that proceeds from the model (or schema) to the tags:
If TEI users have been less interested in interpretive than editorial markup, perhaps it is because TEI appears to read both structure and meaning too definitively into a text that should be a resource or open set of data for any scholar to interpret at will. Yet, this may be more the fault of our blindness to the ways in which markup may act not as formalizing structures but can mark, instead, moments of textual instability, suggestiveness, or ambiguity. This method would pinpoint moments of figuration that educe both semantic or non-semantic abstraction. Certain TEI tags, precisely because of their ambiguity, became generative in such a way.
Singer’s analysis of the discussions that took place in her course exemplifies this claim. Decisions about marking up line groups led the students to think about the historical codification of the stanza, and about how these particular poems, by these particular women writers, push back on that codification. This is what is so compelling about markup as a scholarly act: not that formalization will lead to interesting conversations about formalization as such—though that will certainly happen—but that encoding will lead to discoveries about the original objects of study.
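The kind of decision Singer describes plays out in the basic TEI verse elements. The lines below are invented placeholders, but choosing a @type value for `<lg>` is precisely where encoding forces the interpretive question of what counts as a stanza:

```xml
<!-- Tagging a line group forces a decision: is this a "stanza" in the
     historically codified sense, or something the poem's form resists? -->
<lg type="stanza">
  <l>First line of an invented example</l>
  <l>Second line of an invented example</l>
</lg>
<!-- An irregular grouping might earn a different, contested label -->
<lg type="verseParagraph">
  <l>A run of lines that refuses stanzaic regularity</l>
</lg>
```

`<lg>` and `<l>` are the standard TEI verse elements; the @type values here are the encoder’s interpretive choice, which is exactly what generated the classroom discussion Singer reports.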
We can also see Piez’s assertions of a prospective/retrospective slippage or tension play out in Singer’s course through the way that markup is actually taken up by the students:
As mentioned above, students were especially gunning for the point at which they could transform their underlying encoding to “the real edition.” The disjunction between the encoding “signified” or genotype and the transformed document “signifier” or phenotype presented some important practical and theoretical issues (Buzzetti 2002). Students began to ask why TEI-encoded websites didn’t always have a button or function to display encoding to savvy readers, as another type of annotation or editorial self-awareness. (Singer)
The focus on the end user and on visualization is something that, in my limited experience, I have already noticed as a major concern for students using TEI/XML for the first time. What does this do? How will people interact with this text? And I agree with Singer’s students—why isn’t the markup available to savvy readers? In some cases, like the Dorr Rebellion project, there is this option. TEI projects hosted on TAPAS also have built-in mechanisms for accessing the TEI that go beyond right-click → View Source. The foregrounding of the markup as a visible analytical language will be central to the ultimate form of visualization for my project.
While her approach is very useful to me in conceiving my own, Singer is ultimately invested in another traditional humanities methodology: close reading. The final project in her class was not a scholarly edition of poetry, but rather a hermeneutical paper based on the close reading of texts. In the end, is Singer privileging a ‘high criticism’ over the ‘low criticism’ of scholarly edition production? Does this re-inscribe the divide between the two scholarly activities? What she seems to be after is more than an acknowledgment or embodiment of how the edition/database/visualization is interpretive; instead, she wants that interpretive work to inform the activity of close reading. This is an interesting departure that speaks to the differences between the goals of our projects. I would like to retain the open and interpretive tagging that Singer’s approach employs while pursuing the metacognitive awareness that Conatser aims for. Maybe the only difference is that instead of a scholarly edition, I am asking students to create encoded portfolios of their work—portfolios that would be neither institutional nor commercial.
Another approach, as opposed to ePortfolios—which, in my experience, often struggle with audience—may be to ask students to compose for recomposition. In some ways, this is a reinscription of the kind of work scholarly editors have always undertaken: creating a work that will be used as the object of interpretation by other scholars. This may connect to a thread of writing studies that foregrounds recomposition/remix in public writing. Jim Ridolfo and Dànielle Nicole DeVoss posit the term “rhetorical velocity” in the webtext introducing the concept:
The term rhetorical velocity, as we deploy it in this webtext, means a conscious rhetorical concern for distance, travel, speed, and time, pertaining specifically to theorizing instances of strategic appropriation by a third party. In thinking about the concept, we drew from several definitions:
- Rapidity or speed of motion; swiftness.
- Physics: A vector quantity whose magnitude is a body’s speed and whose direction is the body’s direction of motion.
- The rate of speed of action or occurrence.
Combining these definitions allowed us to create a term (i.e., rhetorical velocity) that allows us to wrestle with some of the issues particular to digital delivery, along with layering in a concern for telos.
To embody their idea of composing for recomposition, Ridolfo and DeVoss deliver their text (digitally) in the form of a press release, which, they argue, is a generic example of writing with remix or recomposition in mind that has been around for decades. One could argue that a scholarly edition of a text (whether print or digital) is a form of composing for hermeneutical recomposition: the edition as such is not the end goal; rather, it is the scholarly activity that the edition enables. Is this a possible approach? How would a model of public composition for recomposition shift the way students prospectively (and retrospectively) mark up their texts? How, to use Jerome McGann’s phrase, will they “deform” their texts for this purpose? I don’t think it is possible for this kind of rhetorical velocity to arise in Conatser’s version of XML in the writing classroom. Rather, a more exploratory approach to the markup is required to promote this awareness: an approach that allows a customized schema to emerge from the actually-existing tags arising from the actually-existing needs of students composing their texts.