Direct visualization as/is a tactical term

I was struck by Lev Manovich’s description of “direct visualisation,” especially after hearing from Katherine Bode about Mitchell Whitelaw’s “generous interfaces.” I’m not sure which of these terms I prefer, but I will use direct visualization for the remainder of this post, as I think Manovich’s idea gets closer to my own project, which I will eventually describe. I still think there are problems with direct visualization. Some of these problems are raised in Denis Wood’s article, “A Map is an Image Proclaiming its Objective Neutrality.” In particular, visualizations of all kinds often participate in the same rhetoric of objectivity that Wood associates with (official) maps. While direct visualization promises to visualize in a way that is less abstracted from the original material, the name itself belies the reduction that is necessary in projects like Cinema Redux, which Manovich describes. This naming is problematic, perhaps arguing for Whitelaw’s more spacious “generous interfaces” over direct visualization.

But Manovich pushes his description of direct visualization further. When describing what direct visualization would look like without reduction/sampling, he says:

But you have already been employing this strategy [direct visualization] if you have ever used a magic marker to highlight important passages of a printed text. Although text highlighting is not normally thought of, we can see that in fact it is an example of ‘direct visualisation without sampling’. (44)

When compared with generous interfaces, this description (without sampling) seems to be a difference in kind, rather than a difference in degree. This is where I begin to think about using this terminology in my own project. Despite the fact that we would not be spatially representing anything, would it be possible to present this project in terms of mapping? Or as a form of visualization? Can direct visualization be deployed as a tactical term to get students thinking productively about what text encoding affords? Perhaps this is a connection that doesn’t need to be made. I think the key aspect I want to get at here is something Bethany Nowviskie asserts when talking about Neatline:

Neatline sees humanities visualization not as a result but as a process: as an interpretive act that will itself – inevitably – be changed by its own particular and unique course of creation.  Knowing that every algorithmic data visualization process is inherently interpretive is different from feeling it, as a productive resistance in the materials of digital data visualization. So users of Neatline are prompted to formulate their arguments by drawing them.

The act of drawing is productive in a way that abstract thinking about drawing cannot be. Already in my composition classes, I ask students to create abstract models of writing. This primarily takes two forms: first, “reverse outlining” their own papers to prepare for revision; second, generating abstract outlines of examples of writing (student and otherwise) to use as models for their own writing. I use these methods because they work. These activities help students organize their writing, but they also help students learn to speak back to texts: understanding the underlying structure can help them see how an argument functions and critique it. My argument for using TEI/XML is that we could formalize these abstractions and make them an explicit part of the composition process. Since I am fundamentally interested in TEI markup as an explicit interpretive act, I want to make the markup visible, even in the final product. This is akin to Manovich’s idea of highlighting-as-visualization. Though the TEI was not developed as a visual medium, it can be adapted to such a project through XSLT.
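As a rough sketch of what “making the markup visible” could mean in practice, the transform below turns each TEI-style element into an HTML span labeled with its tag name, so the encoding survives into the rendered page rather than disappearing. Python’s standard-library ElementTree stands in for a real XSLT stylesheet here, and the tag names and CSS class scheme are hypothetical:

```python
# A minimal sketch of rendering markup visibly rather than flattening it away.
# The element names (<claim>, <evidence>) and the "tei-" class prefix are
# illustrative, not part of any actual TEI customization.
import xml.etree.ElementTree as ET

TEI_FRAGMENT = """<p>We hold this <claim>argument</claim> to be
<evidence>supported by the text</evidence>.</p>"""

def render_visible(xml_text):
    """Wrap each element in a span that carries its tag name, keeping
    the interpretive act of tagging visible in the final output."""
    root = ET.fromstring(xml_text)
    parts = []

    def walk(el):
        # Expose the tag name as a class and tooltip on the rendered span.
        parts.append('<span class="tei-%s" title="%s">' % (el.tag, el.tag))
        if el.text:
            parts.append(el.text)
        for child in el:
            walk(child)
            if child.tail:
                parts.append(child.tail)
        parts.append("</span>")

    walk(root)
    return "".join(parts)

html = render_visible(TEI_FRAGMENT)
print(html)
```

Paired with a stylesheet that highlights each `tei-*` class, this would approximate Manovich’s highlighter: the reader sees both the text and the interpretive layer laid over it.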

Even if my conception of mapping/visualization of texts is problematic, there is also a pragmatic ethos to Nowviskie’s assertion that is very seductive to me, one that applies to textual encoding as a scholarly activity as much as it does to deep mapping. If TEI markup becomes the principal form in which students compose texts, there will be moments that are difficult. One can imagine whole class sessions dedicated to deciding between the deployment of one tag or another. The interpretive process of TEI is certainly one of the primary features that makes it a productive tool for scholarship. Eventually, though, students will have to choose one option and roll with it if they want to complete their composition. They simply can’t endlessly discuss the relative benefits and drawbacks of a particular tag. This grounding in the material (i.e., the tags) seems to enable both abstract and concrete thinking with respect to a text in a way that I think will be productive in a composition classroom.

[…] [This post is re-published from an invited response to a February 2014 MediaCommons question of the week: "How can we better use data and/or research visualization in the humanities?" I forgot I had written it! so thought I would cross-post it, belatedly, to my blog. Many thanks to Kevin Smith, a student in Ryan Cordell's Northeastern University digital humanities course, for reminding me. Read his "Direct visualization as/is a tactical term," here.] […]

Kevin, I’m intrigued by your assertion that “the TEI was not developed as a visual medium.” It’s especially interesting given recent discussions DSG/NULab members have had regarding procedural vs. descriptive markup. I know you have the Wendell Piez article on this topic, and that may be an interesting place to start.

Perhaps instead of thinking about TEI as a visual medium (through XSLT etc.), it may be useful to think about how it inherently is one (taking your title and running with it). After all, we don’t really encode TEI without the hope of some form of visual output.

Just recently, I helped out with an undergraduate TEI assignment, and it was fascinating to see how their use of TEIBP and CSS impacted their markup. This ranged from cute touches, like styling the documents so that references to decay rendered in a nasty olive green, to actually reformatting nested structures to ensure that certain words rendered in certain ways. In a sense, the way to productively close discussions about the “relative benefits and drawbacks of a particular tag” was often the visualization. I don’t know exactly what your takeaway is (in regard to your project), but I thought you might find it useful!
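For what it’s worth, a rule along those lines can be tiny. Because TEI Boilerplate serves the TEI elements straight to the browser, plain CSS selectors reach them; the element, attribute, and value below are hypothetical stand-ins for whatever the students actually encoded:

```css
/* Hypothetical sketch: color any segment encoded as a reference to decay
   olive green, so the tagging decision itself is visible on the page. */
seg[ana="#decay"] {
  color: olive;
}
```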

Thanks for the comment. I’ve been carrying around a copy of that Piez article all week; I just haven’t gotten to it yet. I agree with you, and your discussion of the TEI assignment is really helpful. What I was trying to get at, I guess, is that the visual output we usually imagine in an encoding project doesn’t make the markup visible in the way that I imagine. And what I failed to describe is that I would like to make the markup part of the visual output in a way that may not be conventional for TEI assignments. Still, your description of how the undergraduate project actually played out makes me reconsider: I was thinking that I would have to develop something other than TEIBP for this, but maybe having students work from it and customize is a better approach. Either way, first I have to focus on the schema customization.
