Creative practice
Observatory

How to think about/do evaluation?

A practical reflection on generative evaluation

Not all evaluation dances to other people's agendas. While much of the evaluation work in the CreaTures project was led and analysed by researchers in discussion with creative practitioners to meet the requirements of policymakers and funders, some was intended to change that dynamic.

Equally important are tools that enable practitioners to judge whether their designs are successful on their own terms and then share this. To be useful, such evaluation needs to be lightweight, unobtrusive and non-repetitive: it has to work alongside tight schedules and limited funding; it has to allow for experimentation as well as production. But its design can also show rigour and allow goals of transformation to be assessed on their own merits.

I think of the 'before' and 'after' of transformation-oriented creative practice as like the Copenhagen train map. Lots of trains come in from diverse places and go out to lots of other destinations; however, there is a point (near Østerport) where almost all the trains go on a particular section of the track – so it is with the processes of working to make cultural change. We don’t know in advance what influences and concerns people will bring, and the focus of the practitioner will vary (from alternative economies, to improving biodiversity, to post-fossil futures and beyond). We don't know what meanings will be made or where the combination of a group of people interested in these topics will travel in their thinking; but, in the end, there are only certain tracks to follow for cultural change (rather than innovative expression for the sake of it). These involve making connections between the global and the local, between what we hope for and what we do, and so on.

Not every creative practitioner is seeking to make transformation. Here, using learning from CreaTures, we are concerned with transformative futures stimulated through creative practice. In this case, a fundamental tenet is that some change has taken place. In other parts of our work, we talk about what change we see happening and the paths used to make it; here we talk about how to know it is being made.

The best generative evaluation practices do not feel like judgment; they are woven into other work and become part of the experience. They offer a practical step that allows some before-and-after reflection without breaking the flow of the event. Arts Council England offers advice on how to conduct ‘thoughtful evaluation’ (see below), suggesting organisations first ask themselves about the ‘aims of the work’ and ‘the mission behind your organisation’. Their focus is designing a good survey – one that allows success to be measured. However, surveys are not always easy to incorporate into practice; they aren't much fun and participants may not complete them. How can we gather information on transformation in a more lively, reflective fashion? One way that is more attuned both to practitioners' aims and to participants' experiences is to make the evaluation part of the event, not an addition to it.

To give an example, the 2007-08 Democratising Technology (DemTech) project engaged older people in imagining novel technology. The Geezers – a self-named group of participants – supported by the artist Lorraine Leeson, undertook a 10+ year initiative to build a novel water turbine design and tackle environmental concerns by diversifying renewables research. The turbine was launched on the River Thames in 2013, followed by a floating, pollution-busting waterwheel on the River Lea in 2017. In 2022, the collection of projects won a Times Higher Education Knowledge Exchange award.

DemTech’s potential for making transformation was based on equipping groups with several different elements that contribute to transforming attitudes and sense of self: 

  • Forum – a space to contribute and people to listen 
  • Motivation – the desire to contribute 
  • Articulacy – the vocabulary and fluency to present one’s ideas in a particular domain 
  • Confidence – the assurance to become involved 
  • Knowledge – enough understanding to have an opinion 
  • [Sense of] Agency – an awareness that change is possible and of oneself as an agent of change 
  • Association – the ability to interpret things together or see links, such as: old and new, people and things, etc. (Light et al. 2009).

The evaluation during the project had many dimensions. We looked at the language used in exchanges with participants, how it changed over the life of the project and what such changes indicated.

We compared the impact of four artists with different priorities across DemTech. While all were interested in social engagement and cultural change, some brought their own concerns to their group, while others were more interested in what their group wanted to explore.

We followed the journeys of these different groups and noted when interest in DemTech fizzled out, which is how we learnt of the extraordinary initiative of Lorraine and The Geezers, running to 2022 and the award for impact. Clearly, this and the other processes mentioned above were labour-intensive and costly.

But one exercise we did at the time was simple and quick, and, because it was designed effectively for the task at hand, it gave us great material about how people had changed through their involvement with us.

Participants were given a simple timeline stretching back to their youth and reaching into an open future, with a marker for the present. They were asked to put significant things and events that they remembered from their past along the timeline. They were then asked to put potential future things and events into the future space. After participation in a series of activities, they were given back their timelines and asked to add other things and events into the future section, reflecting on what expectations for the future the work had given rise to.  

The quality of these later contributions was completely different from those that they had started with. Not only were the participants demonstrating greater insight into the range of things to discuss, but there was more depth to the contributions and greater relevance in them to the global discussions taking place at that time. Single word answers had become mini-essays. Even the commitment with which they undertook the follow-up task was noticeable, with more confidence, energy and enthusiasm.

One of the key things about the timeline task is that it provides structure for answers but not what they need to contain – it is non-directive. Unlike a question that might shape what follows, offering an exercise this free of content can show participants' interests, ideas and commitment without ‘leading’ the answers they give. This makes it possible to show that the experience shaped the transformation. Think of the difference between a) giving people a timeline; b) asking people what they think are important issues in the future; c) asking if they think that something particular will be important in the future (such as the focus of the work); or d) asking whether they are worried about this in the future. Each has a different orientation, and some of these questions will be more determining of people’s next thoughts and experiences than others. The determining quality builds from a) to d), getting less open and more directive as it goes.

Questions along the lines of ‘are you thinking differently or feeling different as a result of participating?’ only go so far, as people do not always notice that their thoughts or feelings have changed. It is often worth asking anyway, but the value of before-and-after exercises is that they reveal whether someone has been changed by your intervention even if participants have not picked up on it.

Non-directive reflective practices – seeding without leading – require structures for evaluation that avoid introducing too much of the practitioner’s own thinking into the process. Asking a question may, by itself, change something in participants’ minds – it tends to set the agenda for everything else we might want to know. An open style is good for collecting responses in a before-and-after context, where people’s second answers can be compared with their earlier responses. More focused, directive questions work better for evaluation conducted only after an event has taken place.

More recently, CreaTures researchers have been working with the arts organisation Furtherfield. Here the focus has been on analysing Furtherfield’s long-term project for Finsbury Park in London, on the future of open and green spaces. ‘The Treaty of Finsbury Park 2025’ invites local people to participate in setting up an interspecies celebration of life in the park. After a sorting activity to give people a role to play, the next hours are spent acting as one of several species that inhabit the park.

While being briefed on their role, participants are also asked for a little information on their attitudes. This starting point for evaluation by the hosts is not distinct from the other preparation work, but it does not contribute to the game; we introduced it so that debriefing later would show how the participants’ thoughts had moved on.

That means we also had to hear from participants at the end of the game to find out what had changed in their thinking. That debrief is important. It reveals whether feelings and thoughts have changed. But it also does some work in making the experience transformative.

Reflection embeds experience. It gives feedback to the hosts about the experience and what could be developed further. But if done collectively (and sensitively, for opening up requires safety and a sense of trust), we can reflect together on:

  • Other people’s views and how they might differ from or complement our own impressions
  • Our own thought processes and what we value
  • How this relates to our worlds and what is salient to take away/act on
  • What it might mean/require in terms of change
  • Who we might want to collaborate with – become friends and allies with

Mainstream monitoring and evaluation approaches do not serve the types of change processes discussed here. As we explore creative practice as a medium of transformative change, we might ask:

  • What is judged as evidence? What is success?
  • Over what time frame and range of stakeholders can/should success be judged?
  • How can evaluation value risk-taking and related implications for creative practice?
  • How do we account for emergence?
  • How can evidence be gathered of the impacts of creative cultural interventions without reducing context-specific, often highly personal and difficult-to-capture qualitative outcomes to product- or outcome-focused analysis?
  • How can evidence be expressed so it supports the practitioner’s practice, the learning of other participants, and other professional groups’ understandings of its value (for instance, policymakers or climate scientists)? 

Doing this helps creative practitioners to support claims of impact with respect to other professions and disciplines. This is not to instrumentalise art, design and other creative work, but to formalise some processes for understanding what that work might mean on its own terms. Methods like play and workshopping do not lead to specified and quantifiable outcomes in the way that simple structures do, so we need good processes that are relevant to our choices of approach and the questions we might want answered. These are generative evaluation practices: generative because they are not an external judgment at the end of a process, but a part of the engagement, iteration and understanding that makes up the reflective work of creating socially-engaged practices.

References

Light, A., Simpson, G., Weaver, L. and Healey, P.G. (2009) “Geezers, Turbines, Fantasy Personas: Making the Everyday into the Future”, Proc. Creativity and Cognition 2009, Berkeley, Oct 2009

Light, A. (2011) “Democratising Technology: Inspiring Transformation with Design, Performance and Props”, Proc. CHI 2011, 2239-2242

The Arts Council’s pages on Evaluation for Artists are here: https://impactandinsight.co.uk/docs/getting-started-with-thoughtful-evaluation/

An interesting contrast can be found at the Centre for Cultural Value: https://www.culturehive.co.uk/research-and-evaluation-practice/

And also on the pages of the Happy Museum: https://happymuseum.gn.apc.org/resources/measure-what-matters/

Each set of resources handles evaluation for practitioners in very different ways.