When I presented at MSA 13 earlier this month, I was unsatisfied with my methods for creating network visualizations of texts. I knew that automatic preprocessing would not work yet, since I have yet to identify precisely how I want to designate nodes across larger bodies of poems. What I've been looking for is a way to mark texts up descriptively, using some form of markup language (XML, TEI), that would be uniform enough to render data that could be meaningfully displayed, and then to find a visualization software package with an algorithm that would "work" the way I wanted it to. The problem, of course, is that when you're a rogue DH scholar out in the world borrowing tools and using whatever falls your way, you're not going to be sure how each tool works (unless you have a CS or social science degree that includes training in network algorithms, which I do not), and this detracts from the validity of how and what you can say about your object of study. On the flip side, tools and text analysis software are becoming more widely available, so doing what I've done, which is to say Googling "discourse network tool" and finding Philip Leifeld's Discourse Network Analyzer, is actually possible. What is remarkable about how DNA, a GUI text processing application, works is that it is designed as an interpretive tool for marking texts up in XML so that they can be displayed using free network visualization software such as Visone, UCINET, or NetDraw. The designed purpose of Leifeld's DNA software is to collect articles on a topic area and to use those articles to create network visualizations of agreement and disagreement between individuals and groups. For example, the sample dataset used for a tutorial on the software comes from someone at the University of Maryland named Dana R. Fisher (I have no idea who she is... but I'm definitely going to look her up!)
who marked up articles, testimony, and other texts about climate change. Essentially, she could input each text into the DNA software, create a basic XML document with very minimal encoding (document type, author, dates, title), and then use DNA to select portions of text that constitute a "statement" about climate change. By tagging the speaker, the organization the speaker is affiliated with, and the content type (a restricted list of terms created by the user to describe the topic being discussed), as well as whether the speaker agreed or disagreed with the topic, she could create networks of statements made about climate change that also included the individuals involved in the climate change debate and their organizations. Such a visualization helps us to understand how much any two groups (say, the Senate and the EPA) agree with one another, to identify the issues on which they agree and disagree, and to understand affiliations (which speakers are affiliated with which climate change debates).
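To make that data model concrete, here is a minimal Python sketch, not Leifeld's actual code or file format: the speakers, organizations, and categories are invented for illustration. It shows how statements of this shape (speaker, organization, category, agree/disagree) can be tallied into the kind of organization-level agreement ties such a network draws.

```python
from collections import defaultdict

# Hypothetical statements in the shape DNA captures: each tagged with a
# speaker, an affiliated organization, a user-defined category, and a
# flag for agreement or disagreement with that category's claim.
statements = [
    {"speaker": "Sen. A", "org": "Senate", "category": "emissions caps", "agrees": True},
    {"speaker": "Dr. B",  "org": "EPA",    "category": "emissions caps", "agrees": True},
    {"speaker": "Mr. C",  "org": "Senate", "category": "carbon tax",     "agrees": False},
    {"speaker": "Dr. B",  "org": "EPA",    "category": "carbon tax",     "agrees": True},
]

def org_positions(stmts):
    """Tally each organization's stance on each category."""
    positions = defaultdict(lambda: {"agree": 0, "disagree": 0})
    for s in stmts:
        key = (s["org"], s["category"])
        positions[key]["agree" if s["agrees"] else "disagree"] += 1
    return dict(positions)

def shared_agreements(stmts):
    """Pairs of organizations that agree on the same category:
    the kind of tie an agreement network draws an edge for."""
    by_category = defaultdict(set)
    for s in stmts:
        if s["agrees"]:
            by_category[s["category"]].add(s["org"])
    edges = []
    for cat, orgs in by_category.items():
        orgs = sorted(orgs)
        for i in range(len(orgs)):
            for j in range(i + 1, len(orgs)):
                edges.append((orgs[i], orgs[j], cat))
    return edges
```

With these invented statements, the Senate and the EPA share one agreement edge (both agree on "emissions caps") but split on "carbon tax," which is exactly the agree/disagree structure the visualization renders.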
This isn't *exactly* what I had in mind, but it's really darn close. The power of this particular piece of software is that I can be in charge of what constitutes an article (a poem), what constitutes a speaker (the poetic speaker, the author, the third-person omniscient... all of them), and the "content" to be described. Granted, the "organization" classification is less helpful to me, but in the case of "The Venus Hottentot (1825)" I could use this feature to differentiate the speakers of the poem's first section from those of its second. Using the software this way does not begin to tap its real power, which is to read topics and speakers across large corpora of texts in similar ways. For now, I'm looking at one poem; in the future, however, I could situate this poem in a larger public discourse about black female subjectivity. I could import, for example, Sander Gilman's article "Black Bodies, White Bodies: Toward an Iconography of Female Sexuality in Late Nineteenth-Century Art, Medicine, and Literature," which we know Elizabeth Alexander read before writing the poem. We could also bring in articles by Sadiah Qureshi on "Displaying Sara Baartman" or Terri Francis's "I and I: Elizabeth Alexander's Collective First-Person Voice, the Witness and the Lure of Amnesia," or chapters from Deborah Willis's *Black Venus 2010* and demonstrate how Alexander's poem participates in a larger act of social recovery.
There are, as with any tool, limitations, though. So far, the only way to create the visualizations is by using the speakers, organizations, and categories, with directional lines indicating agreement or disagreement. I have not found a way of creating networks of "statements." In other words, I have not found a way to pull a category and then visualize the network of statements about that category and how they relate to each speaker; however, I have only begun the process of creating visualizations. Another complication is that I have only found ways to associate a statement with a single category. I'm fairly certain I can find a workaround for that, but for the moment it's not worked out. I will say, though, that having to choose among fixed category designations (ones of my own creation) made me very attuned to my assumptions about the text. That process helped me realize that my visualizations of these networks will always be limited, and it reminded me that I need to make those limitations transparent when I write about what a visualization actually visualizes.
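One conceivable work-around for both limits, sketched in Python with invented statements rather than the DNA file format, is to export statements carrying a *list* of categories and build the statement-to-category network directly, outside the GUI:

```python
# Hypothetical export: each statement keeps an id, a speaker, and a list of
# categories, sidestepping the one-category-per-statement restriction.
statements = [
    {"id": "s1", "speaker": "Speaker I",  "categories": ["display", "science"]},
    {"id": "s2", "speaker": "Speaker II", "categories": ["display"]},
    {"id": "s3", "speaker": "Speaker I",  "categories": ["recovery"]},
]

def statement_category_edges(stmts):
    """Bipartite edges linking each statement to every category it touches."""
    return [(s["id"], c) for s in stmts for c in s["categories"]]

def statements_for(category, stmts):
    """Pull one category and list every statement (with its speaker)
    tagged with it: the 'network of statements' view described above."""
    return [(s["id"], s["speaker"]) for s in stmts if category in s["categories"]]
```

The edge list could then be loaded into a visualizer such as Visone as a bipartite network, so that selecting one category shows all statements attached to it and, through them, their speakers.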
In the meantime, even though I am not teaching right now, I'm really excited about what this kind of software could mean for my students. In the English 101 courses at the University of Maryland, students write three linked assignment papers on a self-selected research topic. These are position papers, in which the student must make purposeful arguments for what he or she believes and respond to the discourse of the field in which the selected debate is ongoing. We generally assign an annotated bibliography as the first part of that linked assignment as a way of getting students to read the work and then explain who agrees with whom on particular points and who disagrees. The hard part of this assignment is that each entry is generally two paragraphs long, the bibliography includes only 8-10 sources, and getting the students to actually compare arguments, identifying points of agreement and disagreement, is difficult. However, if the assignment were to use the Discourse Network Analyzer to import each article and then go through it tagging "statements," "speakers," "organizations," and "categories" (for example, are the speakers arguing that a particular action should be taken, or that one event causes another...), as well as "agreement" or "disagreement" with each statement, students might begin to see how their readings create a network of ideas. By understanding who agrees and what they agree upon, a student might be better able to situate him- or herself within the discourse of that issue. It's an intriguing idea to me, and at some point when I'm teaching again, I think I'm going to make use of this technology.