[Note: This post was a DHNow Editor’s Choice on May 1, 2012.]
The research I am doing presently uses visualizations to show latent patterns that may be detected in a set of poems using computational tools, such as topic modeling. In particular, I’m looking at poetry that takes visual art as its subject, a genre called ekphrasis, in an attempt to distinguish the types of language poets tend to invoke when creating a verbal art that responds to a visual one. Studying words’ relationships to images and then creating more images to represent those patterns calls to mind a longstanding contest between modes of representation: which one represents information “better”? Since my research is dedicated to revealing the potential for collaborative and kindred relationships between modes of representation historically seen in competition with one another, using images to further demonstrate patterns of language might be seen as counter-productive. Why use images to make literary arguments? Do images tell us something “new” that words cannot?
Without answering that question, I’d like instead to present an instance in which using images (visualizations of data) to “see” language led to an improved understanding of the kinds of questions we might ask and the types of answers we might want to look for, an understanding that wouldn’t have been possible had we not seen the poems differently, through graphical array.
Currently, I’m using a tool called MALLET to create a model of the possible “topics” found in a set of 276 ekphrastic poems. There are already several excellent explanations of what topic modeling is and how it works (many thanks to Matt Jockers, Ted Underwood, and Scott Weingart, who posted these explanations with humanists in mind), so I’m not going to spend time explaining what the tool does here. I will say, however, that working with a set of 276 poems is atypical. Topic modeling was designed to work on millions of words, and 276 poems don’t even come close; part of the project has been to determine a threshold at which we can get meaningful results from a small dataset. So this particular experiment is playing with the lower thresholds of the tool’s usefulness.
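For readers who want to try something similar, the basic MALLET workflow looks roughly like the sketch below: one command to import the poems, one to train the model (the train-topics step is discussed next). This is a sketch only; the directory and file names are hypothetical placeholders, and I drive MALLET from Python here simply for consistency with the later snippets.

```python
import subprocess

# Step 1: convert a directory of plain-text poems (one poem per file)
# into MALLET's binary feature-sequence format. Paths are placeholders.
subprocess.run([
    "bin/mallet", "import-dir",
    "--input", "poems/",
    "--output", "poems.mallet",
    "--keep-sequence",      # required by train-topics
    "--remove-stopwords",   # drop common function words
], check=True)

# Step 2: train the topic model. --num-topics is the knob discussed
# below; the two output files hold the topic keywords and the
# per-file topic proportions used later for the visualization.
subprocess.run([
    "bin/mallet", "train-topics",
    "--input", "poems.mallet",
    "--num-topics", "15",
    "--output-topic-keys", "topic_keys.txt",
    "--output-doc-topics", "doc_topics.txt",
], check=True)
```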
When you run a topic model (train-topics) in MALLET, you tell the program how many topics to create, and when the model runs, it can output a variety of results. As part of the tinkering process, I’ve been adjusting the number of topics MALLET uses to generate the model, and I was just about to despair that the real tests I wanted to run wouldn’t be possible with 276 poems. Perhaps it was just too few poems to find recognizable patterns. For each topic, MALLET assigns an ID number and a set of “topic keys,” keywords that characterize that topic. Usually, when the topic model is working, the results are “readable” because they represent similar language. MALLET would not call a topic “Sea,” for example, but might instead provide the following keywords:
blue, water, waves, sea, surface, turn, green, ship, sail, sailor, drown
The researcher would look at those terms and think, “Oh, clearly that’s a nautical/sea/sailing topic,” and dub it as such. My results on 15 topics over 276 poems, however, were not readable in the same way. For example, topic 3 included the following topic keys:
3   0.04026   with self portrait him god how made shape give thing centuries image more world dread he lands down back protest shaped dream upon will rulers lords slave gazes hoe future
I don’t blame you if you don’t see the pattern there. I didn’t. Except, well, knowing some of the poems in the set pretty well, I know that it put together “Landscape with the Fall of Icarus” by W.C. Williams with “The Poem of Jacobus Sadoletus on the Statue of Laocoon,” “The New Colossus,” and “The Man with the Hoe Written after Seeing the Painting by Millet.” I could see that we had many kinds of gods represented, along with farming and statues, but that’s only because I knew the poems. Without topic modeling, I might have put this category together as a “masters” grouping, but it’s not likely. Rather than look for connections, I was focused on the fact that the topic keys didn’t make a strong case for the poems being placed together, and other categories seemed similarly opaque.

However, just to be sure that I could, in fact, visualize the results of future tests, I went ahead and imported the topic associations by file. In other words, MALLET can also produce a file that lists each topic (0-14 in this case) with each file name in the dataset and a percentage representing the degree to which the topic is present in that file. I imported this MALLET output into Google Fusion Tables and created a dynamic bar graph that collects file IDs along the vertical axis and, along the horizontal axis, shows the degree to which a given topic (in this case topic 3) is present in each file; the conversion step is sketched below. As I clicked through each topic’s graph, I figured I was seeing results that demonstrated MALLET’s confusion, since the dataset was so small.
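The step from MALLET’s output to chart-ready data is mechanical; here is a rough sketch of it in Python. The file names carry over from the earlier hypothetical sketch, and the parsing assumes the older MALLET doc-topics layout (alternating topic-id/proportion pairs sorted by weight); newer MALLET versions emit one proportion column per topic instead.

```python
import csv

TOPIC = 3  # the topic whose per-file weights we want to chart

rows = []
with open("doc_topics.txt") as f:
    for line in f:
        if not line.strip() or line.startswith("#"):
            continue               # skip blanks and the header comment
        parts = line.split()
        source = parts[1]          # file name/path for the poem
        # Assumes the older pair layout: topic id, proportion, topic id,
        # proportion, ... sorted by weight. Adjust for newer versions.
        pairs = parts[2:]
        proportions = {int(pairs[i]): float(pairs[i + 1])
                       for i in range(0, len(pairs) - 1, 2)}
        rows.append((source, proportions.get(TOPIC, 0.0)))

# Two-column CSV (file id, topic 3 weight) ready to upload to Google
# Fusion Tables (or any charting tool) as a bar graph.
with open("topic3_by_file.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file_id", "topic_3_proportion"])
    writer.writerows(rows)
```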
But then I saw this: [Below should be a Google Visualization. You may need to refresh your browser page to see it. If you still cannot see it, a static version of the file is visible here.]

If the graph’s visualization is working, when you pass your mouse over the bars that rise above 0.4, the file ID number (a random number assigned during the course of preparing the data) appears. Each of these files begins with the same prefix: GS. In my dataset, that means the files with the highest representation of topic 3 can all be found in John Hollander’s collection The Gazer’s Spirit; the same check can be run directly against the output file, as sketched below. This anthology is considered one of the most authoritative and diverse, beginning with classical ekphrasis and running all the way up to and including poems from the 1980s and 1990s. Given the disparity in time periods, I had expected the poems from this collection to be the most difficult to group together, because the diction of the poems changes dramatically from the beginning of the volume to the end. In other words, I would have expected these poems to blend with the other ekphrastic poems throughout the dataset on the basis of similar diction more than anything else. MALLET has no way of knowing that these files are included in the same anthology; all of the bibliographical information about the poems has been stripped from the text being tested. There has to be something else. What that something else might be requires another layer of interpretation. I will need to return to the topic model to see whether a similar pattern is present when I use other numbers of topics, or when I add some non-ekphrastic poems to the set being tested, but seeing the affinity in language among the poems included in The Gazer’s Spirit, in contrast to other ekphrastic poems, proved useful. Now I’m not inclined to throw the whole test away, but instead to perform more tests to see if this pattern emerges again in other circumstances. I’m not at square one. I’m at a square two that I didn’t expect.
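That eyeball check can also be confirmed programmatically; a minimal sketch, reusing the hypothetical CSV from the previous snippet and assuming two-letter collection prefixes on the file IDs:

```python
import csv
from collections import Counter

# Count which collection prefixes (e.g., "GS" for The Gazer's Spirit)
# dominate among files where topic 3 exceeds the 0.4 threshold.
prefixes = Counter()
with open("topic3_by_file.csv") as f:
    for row in csv.DictReader(f):
        if float(row["topic_3_proportion"]) > 0.4:
            prefixes[row["file_id"][:2]] += 1   # prefix length is an assumption

print(prefixes.most_common())  # e.g., [('GS', ...)] if the pattern holds
```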
The visualization in the end didn’t produce “new knowledge.” It isn’t hard to imagine that an editor would choose poems that construct a particular argument about what “best” represents a particular genre of poetry; however, if these poems did truly represent the diversity of ekphrastic verse, wouldn’t we see other poems also highly associated with a “Gazer’s Spirit” topic? What makes these poems stand out so clearly from others of their kind? Might their similarity mark a reason why critics of the 1990s and 2000s define the tropes, canons, and traditions of ekphrasis in a particular vein? I’m now returning to the test and to the texts to see what answers might exist there that I and others have missed as close readers. Could we, for instance, run an analysis that determines how closely other kinds of ekphrasis are associated with The Gazer’s Spirit’s definition of ekphrasis? Is it possible that poetry by male poets is more frequently associated with that strain of ekphrastic discourse than poetry by female poets?
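To give a sense of what such a follow-up might look like, here is a purely hypothetical sketch: it assumes a hand-built file_groups.csv mapping each file ID to a label (sub-genre, poet gender, and so on), which is not something this study has produced, and it simply compares average topic weights across those labels.

```python
import csv
from statistics import mean

# Hypothetical grouping file: columns file_id, group (assumed, not real data).
groups = {}
with open("file_groups.csv") as f:
    for row in csv.DictReader(f):
        groups[row["file_id"]] = row["group"]

# Collect topic-3 weights per group from the CSV built earlier.
by_group = {}
with open("topic3_by_file.csv") as f:
    for row in csv.DictReader(f):
        label = groups.get(row["file_id"], "unlabeled")
        by_group.setdefault(label, []).append(float(row["topic_3_proportion"]))

for label, values in sorted(by_group.items()):
    print(f"{label}: mean topic-3 proportion {mean(values):.3f} (n={len(values)})")
```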
This particular visualization doesn’t make an “argument” in the way humanists are accustomed to making them. It doesn’t necessarily produce anything wholly “new” that couldn’t have been discovered some other way; however, it did help this researcher get past a particular kind of blindness, helping me to see alternatives and to consider what has been missed along the way, and there is, and will be, something new in that.