I was struck by Lev Manovich’s description of “direct visualisation,” especially after hearing about Mitchell Whitelaw’s “generous interfaces” from Katherine Bode. I’m not sure which of these terms I prefer, but I will use direct visualization for the remainder of this post, as I think Manovich’s idea gets closer to my own project, which I will eventually talk about. I still think there are problems with direct visualization. Some of these problems are brought up in Denis Wood’s article, “A Map is an Image Proclaiming its Objective Neutrality.” In particular, I think that visualizations (of all kinds) often participate in the same rhetoric of objectivity that Wood associates with (official) maps. While direct visualization promises to visualize in a way that is less abstracted from the original material, the name itself belies the reduction that is necessary in projects like Cinema Redux, which Manovich describes. This naming is problematic, and perhaps an argument for Whitelaw’s more spacious “generous interfaces” over direct visualization.
But Manovich pushes his description of direct visualization further. When describing what direct visualization would look like without reduction/sampling, he says:
But you have already been employing this strategy [direct visualization] if you have ever used a magic marker to highlight important passages of a printed text. Although text highlighting is not normally thought of, we can see that in fact it is an example of ‘direct visualisation without sampling’. (44)
When compared with generous interfaces, this description (without sampling) seems to be a difference in kind, rather than a difference in degree. This is where I begin to think about using this terminology in my own project. Despite the fact that we would not be spatially representing anything, would it be possible to present this project in terms of mapping? Or as a form of visualization? Can direct visualization be deployed as a tactical term to get students thinking productively about what text encoding affords? Perhaps this is a connection that doesn’t need to be made. I think the key aspect I want to get at here is something Bethany Nowviskie asserts when talking about Neatline:
Neatline sees humanities visualization not as a result but as a process: as an interpretive act that will itself – inevitably – be changed by its own particular and unique course of creation. Knowing that every algorithmic data visualization process is inherently interpretive is different from feeling it, as a productive resistance in the materials of digital data visualization. So users of Neatline are prompted to formulate their arguments by drawing them.
The act of drawing is productive in a way that abstract thinking about drawing cannot be. Already in my composition classes, I ask students to create abstract models of writing. This primarily takes two forms: first, ‘reverse outlining’ their own papers to prepare for revision; second, generating abstract outlines of examples of writing (student and otherwise) to use as models for their own writing. I use these methods because they work. These activities help students organize their writing. But they also help students understand how to speak back to texts. Understanding the underlying structure can help them get at how an argument is functioning and help them critique that argument. My argument for using TEI/XML is that we could formalize these abstractions and make them an explicit part of the composition process. Since I am fundamentally interested in TEI markup as an explicit interpretive act, I want to make the markup visible, even in the final product. This is akin to Manovich’s idea of highlighting-as-visualization. Though the TEI was not developed as a visual medium, it can be adapted to such a project through XSLT.
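To make this concrete, here is a minimal sketch of what that pipeline could look like. The encoding choices are hypothetical (a student tagging a rhetorical move with TEI’s `seg` element and an `ana` attribute pointing to a made-up `#claim` category), and the XSLT is only the smallest template needed to carry that markup into a visible HTML span rather than discard it:

```xml
<!-- Hypothetical student markup: a rhetorical move tagged inside a TEI paragraph -->
<p xmlns="http://www.tei-c.org/ns/1.0">
  The essay opens with
  <seg ana="#claim">a broad claim about social media</seg>
  before narrowing to its actual argument.
</p>

<!-- Minimal XSLT 1.0 sketch: turn each tagged segment into a styled span,
     using the ana value (minus the '#') as the CSS class -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0">
  <xsl:template match="tei:seg">
    <span class="{substring-after(@ana, '#')}">
      <xsl:apply-templates/>
    </span>
  </xsl:template>
</xsl:stylesheet>
```

Paired with a stylesheet that highlights spans of class `claim`, the student’s interpretive act survives into the reading interface — the digital equivalent of Manovich’s magic-marker highlighting.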
Even if my conception of mapping/visualization of texts is problematic, there is also a pragmatic ethos to Nowviskie’s assertion that is very seductive to me, and one that I think applies to textual encoding as a scholarly activity as much as it applies to deep mapping. If TEI markup becomes the principal form in which students compose texts, there will be moments that are difficult. One can imagine whole classes dedicated to deciding between the deployment of one tag or another. The interpretive process of TEI is certainly one of the primary features that makes it a productive tool for scholarship. Eventually, though, the students will have to choose one option and roll with it if they want to complete their composition. They simply can’t endlessly discuss the relative benefits and drawbacks of a particular tag. This grounding in the material (i.e., the tags) seems to enable both abstract and concrete thinking with respect to a text in a way that I think will be productive in a composition classroom.