Wading Into Data

On the first day of the Building a Digital Portfolio institute, I told the group that my biggest anxiety about using digital art history methods to investigate my dissertation topic had to do with limited access to digitized data. I went on to complain in my first blog post that it is presently impossible to carry out a visualization project on fifteenth-century Naples because no structured data set exists for the topic. In the course of this past week, I have already begun to build my own data set to remedy this problem. This is a project I have contemplated for over a year, but I failed to get started because it seemed too large an undertaking for me to take on alone at this stage in my dissertation research. The idea of building a data set also made me uncomfortable because so many aspects of my dissertation topic lack firm documentary evidence—a point made even clearer to me by the word cloud of my dissertation I created using Voyant Tools, which includes the high-frequency words ‘probably’ and ‘likely.’

Voyant word cloud of my dissertation, with the words ‘probably’ and ‘likely’ included.

Although there is much about my dissertation that I can’t responsibly confine to a single cell in a spreadsheet, I think there is a great deal that I can learn about fifteenth-century Neapolitan art, and the patronage of King Ferrante in particular, by trying to visualize the limited documentary evidence that does exist. Fortunately for the Neapolitanists of the world, Gaetano Filangieri compiled six volumes of primary source material on art of the Kingdoms of Naples and Sicily (Documenti per la storia, le arti e le industrie delle provincie napoletane), published 1883-1891. These volumes (now available via HathiTrust) are invaluable, since many of the sources Filangieri compiled were destroyed in the bombing of the Neapolitan archives in 1943.

Sample page from Gaetano Filangieri’s Documenti per la storia, le arti e le industrie delle provincie napoletane (v. 5).

Rather than continuing to bemoan the lack of digital source material in my field, I’ve decided to focus my efforts on building a structured data set from this one critical source that is available. My project will focus specifically on Filangieri’s volumes 5 & 6, which provide chronological lists of archival material (often including the artist’s birthplace, trade, date and description of the commission, patron’s name, and the amount an artist was paid). Within those volumes, I’m focusing on records from the dates of King Ferrante’s reign, 1458-1494.
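To make concrete what a structured version of one of these entries might look like, here is a minimal sketch in Python. The column names, the sample values, and the file name are hypothetical illustrations of the fields described above, not a transcription of the actual data set.

```python
import csv

# Hypothetical columns, one row per documented commission; every value below
# is invented purely for illustration.
FIELDS = ["artist", "birthplace", "trade", "year", "patron",
          "commission_description", "payment_ducats", "filangieri_volume"]

sample_record = {
    "artist": "Antonio Esempio",          # placeholder name
    "birthplace": "Cava dei Tirreni",
    "trade": "stonemason",
    "year": 1470,
    "patron": "King Ferrante",
    "commission_description": "work on a palace doorway",
    "payment_ducats": 12,
    "filangieri_volume": 5,
}

with open("filangieri_records.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow(sample_record)
```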

So, what do I hope to accomplish with this data?

I’m interested in visualizing, for example, the most active patrons and artists in the kingdom, and the networks among them. I want to examine the geographic origins of artists working in the Kingdom of Naples and to consider different patrons’ preferences for hiring artists from particular regions. There are many other questions related to artists’ origins, trades, and earnings that I could study through this data set, too. These are quite elementary art historical questions, but because scholarship on Neapolitan art of the early modern period lags behind that on other Italian centers, these basic questions still require attention, and I think data visualization could be a useful way to address them.
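As a sketch of how the patron-artist network question might eventually be approached, here is one possibility using the networkx library and the hypothetical CSV schema from the previous sketch; nothing in it reflects the actual contents of Filangieri’s volumes.

```python
import csv

import networkx as nx

# Build a bipartite patron-artist graph from the hypothetical CSV above;
# each documented commission links a patron to an artist.
G = nx.Graph()
with open("filangieri_records.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        G.add_node(row["patron"], kind="patron")
        G.add_node(row["artist"], kind="artist")
        G.add_edge(row["patron"], row["artist"])

# The most active patrons, measured as the number of distinct artists they hired.
patrons = [n for n, attrs in G.nodes(data=True) if attrs["kind"] == "patron"]
for patron in sorted(patrons, key=G.degree, reverse=True)[:10]:
    print(patron, G.degree(patron))
```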

I have only worked through the letters A, B, and part of the C surnames in Filangieri’s publication, and I have already come to some interesting realizations. This morning I uploaded a PDF version of my data set to Voyant to examine the geographic origins of the artists working in the Kingdom. I was surprised to find that Cava dei Tirreni appears to have been a major producer of artists in this period.

Voyant word cloud of the origins of artists working in the Kingdom of Naples, 1458-1494.

I then drilled down a bit further in my data, this time using Tableau, to see how the different trades of artists coming out of Cava dei Tirreni compared to those from Naples. I found that Naples produced many painters, silversmiths, goldsmiths, glassmakers, tailors, and sculptors, while Cava dei Tirreni specialized in stonemasons, architects, and builders.

Tableau Bubble Plot of common trades in Naples and Cava dei Tirreni.
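For anyone who would rather stay in code than in Tableau, the same sort of comparison can be roughed out with pandas; the file and column names follow the hypothetical schema sketched earlier, so this is an illustration of the approach rather than a reproduction of the actual chart.

```python
import pandas as pd

# Count trades per place of origin for two cities of interest; this mirrors
# the spirit of the Tableau bubble plot using the hypothetical CSV above.
df = pd.read_csv("filangieri_records.csv")
trade_counts = (
    df[df["birthplace"].isin(["Napoli", "Cava dei Tirreni"])]
    .groupby(["birthplace", "trade"])
    .size()
    .unstack(fill_value=0)
)
print(trade_counts)
```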

Of course, this data set is skewed because a couple of large families of builders from Cava dei Tirreni have surnames in the early part of the alphabet. Although I can’t draw any conclusions until I have visualized the entire data set, I’m already getting excited about the power of tools like Tableau and Palladio to macroscopically examine characteristics of fifteenth-century Neapolitan art. The challenge for me at this early stage (in addition to building a tidy, error-free data set) is deciding which tools and types of visualizations are best suited to the questions I’m asking.

Source: Wading Into Data

Scalar and Omeka: First Impressions

After spending some time getting acquainted with Omeka and Scalar this afternoon, I’ve been thinking through how one might decide between the two platforms for a given project. My hunch is that a given project could use either platform, but that the way each interface operates will ultimately lead to distinct internal architectures and, similarly, to projects that look different despite having similar content. After uploading a few test pages and items, it feels very much as if Omeka’s interface emphasizes individual items and options for their arrangement and display, whereas Scalar’s strength lies in the flexibility to create a context in which those same items may be related to each other in a variety of ways.

In terms of the way my mind works, I feel most comfortable building a context and inserting things into it. However, in considering my mapping mini-project, it feels more intuitive to think of locations on the map as distinct items, which would suggest that an Omeka exhibition using the geolocation plugin would be one potential route for creating the map. I am reluctant to go “all in,” however, in part because I dislike the aesthetic quality of Google Maps and want to be able to do more than simply drop a few pins onto it. With regard to Scalar, it is still somewhat unclear what possibilities the interface might offer for this type of project.

Before developing my own project plan, it feels important to further articulate the distinct attributes of each system and the forms their interfaces create. As a result, I’m thinking about building two sample projects, one in Scalar and one in Omeka, using the same (small) data set as a way to further understand their nuances. Stay tuned!

Source: Scalar and Omeka: First Impressions

Day Three

Earlier today, during the third meeting of Building a Digital Portfolio, many of us used Omeka for the first time. It is exciting to see the possibilities for finished projects, including the examples showcased on the Omeka website, and to think about what I can do myself with the many add-ons. This could be a helpful way to organize my own research collection, share it with others, and also maintain data for potential future analysis. Good thing my previous experience as a low-paid intern has prepared me for the tedious task of prepping a CSV import.
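Since prepping a CSV import is the tedium in question, here is a minimal sketch of one cleanup pass in Python before handing a spreadsheet export to Omeka’s CSV import tool; the file names and the “Title” column are assumptions about how such a sheet might be organized, not requirements of Omeka.

```python
import csv

# Tidy an exported spreadsheet before import: strip stray whitespace from every
# cell and drop rows that lack a title. File names and headers are hypothetical.
with open("raw_collection.csv", newline="", encoding="utf-8") as src, \
     open("omeka_import.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        cleaned = {key: (value or "").strip() for key, value in row.items()}
        if cleaned.get("Title"):  # skip rows with no title to import
            writer.writerow(cleaned)
```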

Day 3: Audience, Expectations and Limitations

We all acknowledge the wisdom of planning, though often retrospectively. Moving beyond class papers, we, as art historians and professionals, have to be very conscious of our audience, who have expectations that need to be met. They usually approach our work not from the perspective of a specialist but as an interested generalist. What do we want the audience to get from engaging with our work? This seems to be the first question to answer when conceiving any project, but especially a digital one.

This question is even more complex for museums and cultural institutions. They have multiple audiences with drastically varying expectations. They cannot ignore the first-time museum visitor who has only experienced art through representations, or the academic who has spent the last ten years studying an artist whose work is only on display above a door on the third floor (true story). Physical space is a constraint that is hard to overcome, but virtual space also needs to be managed so it works effectively both to draw in visitors and to provide resources. Web projects are also just one part of a larger strategy, one that includes exhibitions, various types of programming, and publications.

In looking at multiple art-related websites, we had the opportunity to consider different approaches. However, there seems to be a need to acknowledge that while nothing is perfect, we can still applaud the efforts and the move toward greater transparency and accessibility of art. If we know the goal of our digital effort and its audience, we then have criteria for evaluating which tools, platforms, interfaces, etc. best suit our purposes, be it Omeka or Scalar. These tools seem to offer limitless potential and seem (maybe deceptively) easy to use. Nonetheless, I’m still thinking through how I can make good use of them. But they do make me more committed to cleaning up my metadata lists.

Source: Day 3: Audience, Expectations and Limitations

Can Online Experience and Formal Analysis be Friends?

Designing and assessing a visual online interface is hard, even for people whose vocation is based in visual analysis. As art historians, we learn to think critically about art and eventually, inevitably, we begin to experience the world through a critical-analysis lens. The problem we might run into with interfaces (on websites, apps, museum guides) is that their quality is not determined solely by the sum of their formal components. As with (most or some) art, these formal components are vehicles for meaning and message. The education we receive in art history through graduate coursework, independent research, attending conferences, going to look at art, etc. establishes a set of meanings and messages that we learn to look for in works of art. Because we understand something about the context of an object and the canon of art history, we can more easily begin to think about the meaning and message of an object. We generally don’t receive the same education in interface design, so our formal and critical analysis skills only get us so far in understanding what an interface’s message and meaning is. Each element of an interface is designed to promote a specific behavior; if the element impedes the user from enacting that behavior, the interface has some problems. A website can be beautiful, but if it doesn’t get a user to do the “right” behavior, it’s not succeeding in its goal. Not to mention the fact that the entire notion of guiding a user toward a goal specified by the website creator quickly becomes problematic for arts institutions and those of us concerned with democratizing access to content.

What are we doing here? (At the Walker Art Center’s website.)

In thinking about creating online collections, putting written content online, and creating interactive digital projects to accompany my research, my main struggle is imagining that the user will feel taken care of by whatever digital environment houses my content. I frequently struggle on websites associated with arts and education institutions, and in the rest of my life I actively avoid apps and websites with terribly clumsy interfaces. I just don’t want to make something that I wouldn’t want to use, and I am not sure yet how to avoid this plight. Grants? Collaboration? Optimism?

Source: Can Online Experience and Formal Analysis be Friends?

Review of Visualizing Schneemann

I’ve been assigned a review of Michelle Moravec’s Visualizing Schneemann project, which I saw presented live at THATCamp CAA in 2014. Very neat to look at it again with a sharper DH lens. Here are my thoughts:

Applicability: Is it directed at a clear audience? Will it serve the needs of that audience?

The audience is other DH scholars, advanced ones who know and understand the tools used and can follow along quickly. Many areas of new research are identified and teased out, likely with applications for other researchers.

Quality: Is the scholarship sound and current? What is the interpretation or point of view?

Although Moravec admits that some data are missing, she posits that her conclusions would likely be the same: namely, that Schneemann’s network was dominated by strong ties to a small circle of male friends. Her scholarship is not verifiable.

Accessibility: Is there a fee for use? Is specific software required?

Access to the project description is free, but no access is granted to the data behind the research (although screenshots of data visualizations, etc., are available).

User Experience: Easy to navigate? Does it function effectively? Does it have a clear, effective, and original design?

The text (with screenshots) is a narrative read from top to bottom, general to specific, following methodologies that answer specific questions; one can’t get lost, although one can’t get too close either.

Use of New Media: Does it make effective use of new media and new technology? Does it do something that could not be done in other media—print, exhibition, film?

A wide array of tools has been used to complete this project. The visualizations are somewhat effective, but I required her narrative in order to follow along; had she presented the screenshots alone, I would not have been able to. In this way, her scholarship reads in a traditional format.

Referencing Mitchell Whitelaw’s essay “Generous Interfaces for Digital Cultural Collections,” I was surprised that at no point is the audience offered (at minimum) a screenshot of an original Schneemann letter. It seems to me that Moravec has slightly lost her hold on the object that permitted her study of Schneemann’s network.

Source: Review of Visualizing Schneemann

Day 1 Joining a (Virtual) Community

The first day of Building a Digital Portfolio started with a disciplinary existential crisis. With our diverse backgrounds, the discussion on disciplinary approaches quickly morphed into an effort to comprehensively define the field of art history and object-based inquiry. The exercise served as a premonition of what the next two weeks will have in store for us: a nuanced interrogation and re-thinking of not only our research questions but also the ways in which we investigate them.

The first steps into the world of digital art history? Embracing social media platforms as a vehicle for participating in virtual scholarly discussions. These platforms facilitate dialogue as well as provide an opportunity for individuals to craft their own professional identity. Now, is there any cure for Twitter stage fright?

Day 2 Getting Organized

Day two focused on the basic tools that allow us to do digital humanities (without losing our data) and called attention to our methods of collating and storing the foundation of any art historical study: images and references.

The discussion highlighted the need to preserve our digital images by paying attention to archival standards. But the biggest secret to proper image management is metadata. By carefully labeling our images, we ensure they remain identifiable and create the schema that allows us to use them across other platforms and interfaces. However, there are (some) tools in case your image lacks metadata or other identifiers, such as Google Images and TinEye. I know I’ll be spending the weekend taking care of my “bad data hygiene.”
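As a small, hypothetical example of what taking care of “bad data hygiene” might look like in practice, the snippet below uses the Pillow library to flag images in a folder that carry no embedded EXIF metadata at all; those files are the obvious candidates for relabeling or for a reverse-image search with Google Images or TinEye. The folder name is an assumption.

```python
from pathlib import Path

from PIL import Image  # Pillow

# Audit a (hypothetical) folder of research images: report any JPEG that has
# no embedded EXIF metadata, so it can be relabeled or traced later.
image_dir = Path("research_images")
for path in sorted(image_dir.glob("*.jpg")):
    with Image.open(path) as img:
        exif = img.getexif()
    if not exif:
        print(f"No embedded metadata: {path.name}")
```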

We all begin our research on the web, often starting with a simple Google search. These results can be further fine-tuned by using the advanced search settings or Google’s specialized sites (Google Images and Google Scholar). However, how do we keep track of all our searches and results? While I’ve been using Zotero since at least 2008, I always forget that it is more than a simple bibliographic tool. It can also take snapshots of webpages, a safeguard against the ephemerality of the internet.

As our research notes and materials become increasingly digital, organization is imperative. It especially allows us to take advantage of the wealth of online collections without drowning in the material. However, the availability of images and scholarly material online mandates a better understanding of copyright and usage rules. With our tools neatly in order, we can begin to consider engaging with and building digital projects.

Photo: Cleaning a book. Painted by Iwasa Matabei. Ink and color on paper. Japan, Edo Period. Freer Gallery of Art, F1969.15

Source: Day 2 Getting Organized

Crowd-sleuthing with the Art Detective.

It seems a very basic piece of common sense, but to crowd-source, you actually need a fairly considerable crowd. It’s almost impossible to imagine something like The Art Detective taking shape anywhere but the web. As a museum exhibition it would prompt primarily questions, with few answers; as a film or a multimedia presentation it would meet its audience in limited clusters, one at a time. Online it encounters millions of curious folk, with all the diversity of backgrounds, approaches, and knowledge sets that such numbers suggest. And its success has been facilitated by that very crowd.

Let’s back up. What is Art Detective? Art Detective is a project of the UK Public Catalogue Foundation, supported by the Arts Council England. In their own words:

Art Detective aims to improve knowledge of the UK’s public art collection. It is an award-winning, free-to-use online network that connects public art collections with members of the public and providers of specialist knowledge.

The Art Detective front page.

The problem of many collections (the problem of data, attribution, origin) is encapsulated in identifiers like “Anonymous Flemish Artist” or “Unknown Italian Artist, 1650-1700.” For many works, we simply don’t have the paper trail to say with certainty where they came from and who their makers were. Art Detective seeks to change this, not with traditional in-depth provenance research performed by an individual (or a small team), but with crowd-sourced sleuthing. Their call for “specialist knowledge” emphasizes a need for sourcing and authority when making claims or offering data, but that specialist knowledge is vastly diverse, ranging from expertise (or personal connections) in genealogy and family research to botany and horticulture, mapping, military history, and more.

We welcome information such as clues as to whom unidentified sitters may be, artist attributions, an execution date of a painting, or the subject matter. Whether you spot a mistake, or have an interesting story to tell, all submissions are welcome. – Art Detective FAQ

Opening the doors has clearly made an impact: the section of their site marked “Discoveries” lists triumphs of identification, attribution, and storytelling. Paintings have been freshly attributed, locations and subjects have been identified, and the body of knowledge associated with these works has been considerably (and usefully) expanded. The site’s designers obviously worked hard to make the experience of contributing knowledge as simple and intuitive as possible: the links at the top direct you to “Discussions,” where you can see investigations in progress and add your own contributions, while a section titled “Resources” offers help with painting evaluation and research queries for small collections managers and interested amateurs. So: not only does the website gather research, it encourages and inspires research. And it does so stylishly, within an interface I found easy to navigate. Powerful stuff. Museums and collections, take note: your audiences may contain some of the expertise you desperately need to close gaps.

Source: Crowd-sleuthing with the Art Detective.

Building a Digital Portfolio: Project Planning


Sign of Neon. Noah Purifoy. Found material. 1966.

The materials that I selected to develop this project consist of images, ephemera, and a map of damage incurred during the uprising. These items and the proposed map will correspond to chapter one of my dissertation, which examines Noah Purifoy’s Signs of Neon sculptures, created from the drippings of melted neon signs that had been destroyed during the Watts Uprising in 1965. After decades of spatial, political, and economic disenfranchisement, the uprising acted as a catalyst for the improvisatory reimagining of urban space. As works created from the wreckage of the uprising, the sculptures communicate the significance of improvisation as a regenerative strategy in Watts, which is further indicated by the number of cultural centers established in buildings destroyed during the unrest.

In Watts, the linkage of improvisation and the reshaping of urban space extends back to the 1930s and 1940s, when Watts and South Central Los Angeles were home to numerous jazz musicians and clubs. Despite discriminatory laws that sought to segregate different ethnic groups, white Angelenos frequently travelled to South Central and Watts to attend jazz shows. Although police frequently raided them, these performances mark an important moment in which the space of the city was reconfigured and the legal speech upholding discrimination was broken by the impromptu integration of black and white Angelenos in their attendance of jazz performances.

At this stage of its development, it is hard for me to know what questions this map will answer that are different from those addressed in the written chapter, or how the two will relate to each other. What I know with certainty is that the written chapter cannot adequately address or describe these changes while performing the task of analyzing the artworks within the theoretical frame that I have selected. A dynamic (perhaps interactive?) map can offer a sense of the transformation of the built environment over time and demonstrate the changes occurring across multiple sites within Watts in a manner that is more compelling and easily understandable than the same data presented in narrative form.
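As one possible (and entirely hypothetical) starting point for such a dynamic map, the folium library can place time-tagged markers on an interactive web map; the site names, years, and approximate Watts coordinates below are placeholders for illustration, not research findings.

```python
import folium

# Placeholder data: invented sites near Watts, Los Angeles, tagged with a year
# to hint at how change over time might be layered onto the map.
sites = [
    {"name": "Example damaged site", "lat": 33.941, "lon": -118.242, "year": 1965},
    {"name": "Example cultural center", "lat": 33.945, "lon": -118.248, "year": 1967},
]

m = folium.Map(location=[33.94, -118.24], zoom_start=14)
for site in sites:
    folium.Marker(
        location=[site["lat"], site["lon"]],
        popup=f"{site['name']} ({site['year']})",
    ).add_to(m)

m.save("watts_map.html")  # open the saved HTML file in a browser
```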

Source: Building a Digital Portfolio: Project Planning