I’m proud to say that clement greenbot, a Markov chain-based textual analysis of the art critic Clement Greenberg, is live and tweeting. Every few hours, it tweets randomly constructed sentences that resemble Clement Greenberg’s writing in one respect: the probability of one word following another (drawn specifically from “Avant-Garde and Kitsch,” “Collage,” “The Cross-Breeding of Modern Sculpture,” and “Modernist Painting”).
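The mechanics can be sketched as a minimal word-level model (this is illustrative only; clembot is actually built on Shabda Raaj’s code, and these function names are my own):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the text.
    Repeats in the list encode the transition probabilities."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=12):
    """Walk the chain from a starting word, picking each next word with
    probability proportional to how often it followed the current one."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the word only appears at the end of the corpus
        out.append(random.choice(followers))
    return " ".join(out)
```

Because each successor list keeps duplicates, common transitions are proportionally more likely to be chosen, which is all a first-order Markov walk amounts to.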
Some things I learned:
Random isn’t always random. Wrap your head around Hans Arp’s process for a work like the one above, Untitled (Collages with Squares Arranged according to the Laws of Chance) (1917), and it quickly becomes apparent that Hans Richter’s account of that work’s genesis is unlikely to be the whole story: Richter says that Arp took a sheet of paper, laid it on his studio floor, then took another sheet, regarded it for a moment, and “finally [he] tore it up, and let the pieces flutter to the floor of his studio.” Did he select only those scraps that made slight modifications to a gridded composition? Did the air currents flutter just so on that day? And so on. This was true for clement greenbot, too: walking through Greenberg’s corpus randomly produced some interesting results, but it also produced so much noise that it wasn’t clear Greenberg’s work hid in there at all.
Really, it’s not random. These essays contain a few distinctive phrases, which means that if clembot walks into their first word (“ineluctable”), the next is sure to follow (“flatness”). If you watch, “ineluctable flatness” comes up with striking frequency. While this makes the text distinctly Greenbergian, it also made for some weird moments where I thought I had truly made my bot brilliant, only to discover that it was just good ol’ Clement doing that. Likewise, when clembot walks into a list of artists or poets (common especially in Greenberg’s writing on modernist painting), it usually finishes the whole thing.
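You can find these forced paths mechanically: any word that is always followed by the same next word gives the chain a transition with probability 1, so the walk through it is deterministic. A small sketch of that check (again hypothetical names, not the bot’s actual code):

```python
from collections import defaultdict

def forced_transitions(text):
    """Return the words that are always followed by the same next word
    in the corpus; a Markov walk through them has no choice to make."""
    words = text.lower().split()
    followers = defaultdict(set)
    for current, following in zip(words, words[1:]):
        followers[current].add(following)
    return {w: next(iter(s)) for w, s in followers.items() if len(s) == 1}
```

Run over Greenberg’s essays, a table like this would surface exactly the “ineluctable” → “flatness” cases, along with the long artist lists that the bot dutifully recites to the end.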
No, it really isn’t random. Finally, there is the problem of the simple legibility of the generated sentences. They needed a little cleaning: stray punctuation had to be normalized and overlong sentences trimmed. While always ending a sentence with a period is a fine result for my purposes, it does point out the limits of pure chance as a way to analyze a text.
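That cleanup step might look something like this (a sketch of the general idea, not clembot’s actual post-processing; the word limit is an assumed parameter):

```python
import re

def clean_sentence(raw, max_words=25):
    """Trim a generated sentence to a word limit, strip any dangling
    punctuation left at the cut, and make sure it closes with a period."""
    words = raw.split()[:max_words]
    sentence = " ".join(words)
    sentence = re.sub(r"[,;:\-]+$", "", sentence)  # drop trailing comma, etc.
    if not sentence.endswith((".", "!", "?")):
        sentence += "."
    return sentence
```

Imposing the period is exactly the intervention the paragraph above worries about: the tidy ending is mine, not chance’s.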
Anyway, it was fun to build (many thanks to Shabda Raaj, whose example Markov chain code is baked into clembot) and sort of fun to read. As a method of analyzing humanistic data, submitting a text to chance still seems to me like a useful approach, but I’m not entirely sure the Markov chain itself is the best way to achieve it. It may be, for example, that simple chance, or a differently weighted probabilistic approach, would be more insightful for a given set of data.
Source: Clement Greenberg, Arranged According to the Laws of Chance