Tag: cultural data project

Don’t Just Show Me The Money – The Value of Art as Experience

By | Art & Social Change, Art That Counts | 10 Comments

In a single visual, this is pretty much everything that, to me, is wrong with how we talk about the impact of art and arts organizations. Granted, I myself have highlighted efforts that quantify the impact of art in this way, mainly because that approach so dominates the research into what makes art powerful and, in the eyes of funders, worthy.

But I’ve also written a lot in this space about finding a better way.

Economic impact is pretty low-hanging fruit in terms of data related to arts and impact. Money and jobs are easily quantifiable and pretty clearly Good Things the arts should line up to take credit for. But is that why we make art? Why we subscribe to theater seasons, attend art museums or listen to music? As Ben Cameron of the Doris Duke Charitable Foundation asks in the foreword to Counting New Beans: Intrinsic Impact and the Value of Art:

…do artists really create work to leverage additional dollars for the local economy? Do audiences really go to the theater to drive local SAT scores higher?

It’s obvious that we do not, and reports like Counting New Beans do the hard work of establishing why people seek out arts experiences and what they gain by doing so. The study summarized in Counting New Beans looked at 18 theaters in 6 regions and, instead of focusing on their work’s extrinsic values—like the economic indicators above—established categories of intrinsic value and researched those.

This distinction between extrinsic and intrinsic isn’t new; Gifts of the Muse (PDF full report) attempted to provide what Ian David Moss calls a “grand unifying theory” of the benefits of the arts back in 2004, exploring instrumental/extrinsic and intrinsic benefits as they had been discussed and researched to date. The research outlined in Counting New Beans builds upon this, quantifying audience experience in areas like:

  • Anticipation (“How much were you looking forward to this performance?”)
  • Captivation (“How absorbed were you…?”)
  • Post-performance Engagement (“Did you leave the performance with questions you would have liked to have asked the actors, director or playwright?”)

These are intrinsic values, the transformative emotional, social and intellectual experiences that result when we view art. They’re hard to get at and quantify because audience members aren’t always able to articulate their experience (e.g., great art may render someone speechless, which is an amazing feat for an artist and a frustrating obstacle for a researcher).

These values can be difficult to summarize in a single measure of impact, unlike the dollar signs above. Some works connect emotionally rather than intellectually; some are calls to action; others produce a sense of familiarity or connection. As a result, some of the survey tools used produce both qualitative data about how audience members were affected (e.g., “How did you feel after this performance?”) and quantitative data about the degree of impact (e.g., rating the emotional impact of a performance on a scale of 1-5). The qualitative data lets arts organizations know whether the audience left feeling sad or hopeful, while the quantitative data establishes how deeply the work made the audience feel or empathize (regardless of the exact feeling).
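To make that pairing concrete, here is a rough sketch in Python of how responses from such a survey might be stored and summarized side by side. The field names, sample answers and the 1-5 scale echo the examples above but are otherwise hypothetical; this illustrates the idea, not the actual instrument from Counting New Beans.

```python
# A rough sketch, not the actual Counting New Beans instrument: one
# record per audience survey, pairing a qualitative free-text answer
# with a quantitative 1-5 rating.
from collections import Counter
from statistics import mean

responses = [
    {"feeling": "hopeful", "emotional_impact": 4},
    {"feeling": "sad",     "emotional_impact": 5},
    {"feeling": "hopeful", "emotional_impact": 3},
]

# Qualitative summary: what did the audience leave feeling?
feelings = Counter(r["feeling"] for r in responses)
print(feelings.most_common())  # [('hopeful', 2), ('sad', 1)]

# Quantitative summary: how deeply did the work make them feel,
# regardless of the exact feeling?
print(mean(r["emotional_impact"] for r in responses))  # 4
```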

While Counting New Beans focused on live theater performances (and my examples above followed suit), the study of intrinsic impact isn’t limited to theaters. A multidisciplinary study in Liverpool included theaters, museums, an orchestra, and other arts groups. I’d be grateful to hear from any local groups using intrinsic values data, either in describing their work or in assessing grantees. As I try to argue above, I think it’s a stronger depiction of the benefits of arts in our lives and provides arts organizations clearer and more actionable feedback than simple economic indicators.

The Power of Story

By | Art & Social Change, Art That Counts | No Comments

I was heavily involved and invested in museums for the first decade of my career — as a staff member, a fellow, an intern, a volunteer and a museum studies student. So it was a delight to attend the annual meeting of the American Alliance of Museums in Baltimore this week, welcoming the people in the field I follow avidly via Twitter and blogs, and the icons of the museum world, to the city of which I’m such a fan.

Photograph of the AAM program/schedule courtesy of Michelle Gomez via Instagram.

The theme of this year’s conference was “The Power of Story.” And while that might not seem relevant to data and evaluation at first glance, it’s data that gives power to our stories. Inside museums, evaluation and measurement take some forms that might be familiar to the casual visitor (e.g., visitor surveys, comment cards, program evaluations), but also some that might be unexpected or go unnoticed, as a profile from the Wall Street Journal illustrates:

Matt Sikora doesn’t look at the Rembrandts and Rodins at the Detroit Institute of Arts. His eyes are trained on the people looking at them. Mr. Sikora watches where visitors stop, whether they talk or read, how much time they spend. He records his observations in a handheld computer, often viewing his subjects through the display cases or tiptoeing behind them to stay out of their line of sight. “Teenage daughter was with, but did not interact, sat on bench, then left,” read his notes of one visit.

It’s not uncommon for museum evaluators to shadow visitors in the galleries, learning from their movements what areas or objects are engaging and for how long. In addition, before an exhibition opens to the general public, many elements, including label text and interactive gallery displays, are prototyped and tested. Through these evaluations, exhibit designers, curators and museum educators learn more about visitors’ reactions to exhibits: which elements are engaging, confusing or overlooked. In addition, some evaluation tools also provide information about what visitors take away from their time in the gallery — what was learned, what inspired them, what connections they made and, hopefully, what will draw them back again.
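For the curious, here is a rough sketch of what observation data like Mr. Sikora’s might look like once recorded, and how dwell time per stop could be summarized. The record format, exhibit names and behaviors are all hypothetical, invented for illustration:

```python
# A hypothetical record format for shadowed-visitor observations:
# (visitor_id, exhibit_element, seconds_spent, noted_behavior).
from collections import defaultdict

observations = [
    (1, "Rembrandt gallery", 95, "read label, talked"),
    (1, "Rodin bronzes",     20, "walked past"),
    (2, "Rembrandt gallery", 40, "sat on bench"),
]

# Average dwell time per element hints at which stops are engaging
# and which are overlooked.
dwell = defaultdict(list)
for _, element, seconds, _ in observations:
    dwell[element].append(seconds)

for element, times in sorted(dwell.items()):
    print(f"{element}: {sum(times) / len(times):.0f} seconds on average")
```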

What was so empowering about this year’s conference was being able to examine those evaluation tools themselves, and to learn from others’ experience. Surprisingly, technology is not always the answer. Visitor evaluation consultants and staff members from the Brooklyn Museum and Monticello shared scenarios where their attempts to survey visitors went awry: technology got in the way or skewed results, or the target audience was elusive or outright avoided their polling attempts. It just goes to show that even bad data can teach you something, even if it’s not what we set out to learn!

Even more surprising was the lesson that data doesn’t necessarily persuade, no matter how clear or comprehensive. Often, beliefs trump facts. As Stephen Bitgood, Professor Emeritus of Psychology at Jacksonville State University and Founder of the Visitor Studies Association, said, “When strong belief is pitted against reason and fact, belief triumphs over reason and fact every time.” Despite our expectation that data should persuade, prove and set people on the right course, it simply doesn’t override gut instinct, what people feel or believe to be true. Again and again, presenters told tales of data being met with questions or disbelief. Unfortunately, no solutions were presented to either circumvent or resolve this issue, but I am filing this under “knowing is half the battle” and keeping it in mind when data is presented as all-powerful or all-knowing.

Photographs of AAM displays, top to bottom, courtesy of Mariel Smith and Lindsay Smilow via Instagram.

So evaluation and measurement can fail or go awry. Testing our tools and techniques in small batches before rolling out the full survey or other strategy gives us a chance to see them in action and identify areas to fix or improve. If evaluation and measurement are treated as afterthoughts, as is so often the case, these tests are even less likely to occur and the final data may prove useless, further cementing the idea that evaluation itself is a useless activity. It’s a difficult cycle to break, but one worth identifying and tackling so that we can truly tell a more powerful story.

HIPerwall Demo: Cultural Analytics (photo by Flickr user guategringo)

Big Data Meets Art

By | Art & Social Change, Art That Counts | 2 Comments

Big data isn’t something that’s just being covered breathlessly by the likes of Forbes and Fast Company; arts and culture organizations and nonprofits are generating, collecting and sifting through their own data and collaborating to make sense of it all. Initiatives like the Cultural Data Project (CDP) and the National Arts Index have been collecting and sharing data since 2004 and 1998, respectively (check out Baltimore’s Local Arts Index).

The CDP is an online tool that allows arts and cultural organizations to report, review and analyze organizational, programmatic and financial data. Originally developed through a collaboration of Pennsylvania funders, the project expanded to other states beginning in 2007 with Maryland, and now includes 12 states and the District of Columbia. Locally, the Maryland State Arts Council is a member of the Maryland CDP Task Force and requires many grantseekers to complete a CDP organizational profile. More than 14,000 arts and cultural organizations have completed a profile, including 447 Maryland organizations (as of December 1, 2012).

This data collection process yields reliable longitudinal data that is useful to researchers and advocates, as well as to grant makers and the participating organizations. Participants can run and download reports that compare their activity from year to year, as well as against data aggregated from other participating organizations by organization type, geography and budget size.
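As a rough illustration of what such comparison reports involve, here is a sketch using pandas on a hypothetical flat file of CDP-style profiles. The column names and figures are invented for the example; the CDP’s actual data model is far richer:

```python
# A sketch of year-over-year and peer comparisons on hypothetical
# CDP-style profile data; not the CDP's actual schema or figures.
import pandas as pd

profiles = pd.DataFrame({
    "org_id":     ["a", "a", "b", "b", "c", "c"],
    "year":       [2011, 2012, 2011, 2012, 2011, 2012],
    "org_type":   ["theater"] * 4 + ["museum"] * 2,
    "state":      ["MD"] * 6,
    "attendance": [12000, 13500, 8000, 7600, 40000, 41000],
})

# Year-over-year change for a single participant...
own = profiles[profiles.org_id == "a"].set_index("year")["attendance"]
print(own.pct_change())

# ...compared against aggregates of peers by type and geography.
peers = profiles.groupby(["org_type", "state", "year"])["attendance"].median()
print(peers)
```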

While it has been run and organized by The Pew Charitable Trusts for the past eight years, the project is currently in transition and will begin operating as an independent nonprofit on April 1, 2013. In addition, the project has announced a collaboration with the arts and business schools at Southern Methodist University (SMU) and other partners to create a National Center for Arts Research (NCAR) at SMU. Together, these organizations aim to become a nationwide resource on arts attendance and patronage, the impact of the arts in our communities, and the financial trends and health of arts nonprofits. This new center will build upon the comparison reports currently available via the CDP:

NCAR will maintain a website with an interactive “dashboard,” created in partnership with IBM, which will be accessible to arts organizations nationwide. Arts leaders will be able to enter information about their organizations and see how they compare to the highest performance standards for similar organizations in areas such as community engagement, earned and contributed revenue, and balance sheet health.

The current shortcomings and the future potential of the CDP have been outlined in a great article by Talia Gibas and Amanda Keil. Issues such as these were much on my mind as I attended the Greater Baltimore Cultural Alliance (GBCA)’s gathering of cultural data collectors. While the original invite and some of the presentations focused on mapping data, a broader conversation also took place about the challenges local arts nonprofits face when collecting and analyzing data. I was delighted that representatives of the Baltimore tech community (Sharon Paley of the Greater Baltimore Technology Council and Kate Bladow, coordinator of the Tech and Social Change meetup) were in attendance, and a partnership with GBTC has since resulted.

There was some discussion of forming an ongoing group around these issues and, should that come to fruition, I look forward to participating further and meeting more individuals in the arts and nonprofits who are looking for data-driven answers about the impact of their work. One of my major takeaways from this session, however, was that my consideration of data shouldn’t be limited to metrics of impact; data also has the power to describe our community. I look forward to highlighting some of this work already ongoing in Baltimore in future columns.