Planning your evaluation
Museums are increasingly being asked to prove their value to society in robust ways. In the University of Cambridge Museums, we believe that evaluation is at its most effective when we use the methods most appropriate to both the audience and the output. Some of our evaluation and audience development work continues throughout the year; at other times we do more intensive evaluation. We use proportionate methods, based on what the project is, the timescale, the audiences (both existing and potential), the budget, the measures of success, and the change the evaluation can bring. Through all our work, we want to make sure museums and galleries are still spaces to be enjoyed: we don't want our visitors to feel that they are being studied.

We usually carry out evaluation at different times in the life of a project:
Front end: this is the evaluation that happens before a project starts, perhaps testing what an audience knows about a topic, their attitude towards something or getting baseline data. It might take place before an audience is finalised, when we are working out who the activity should be aimed at.
Formative: this is ongoing testing and evaluation that takes place before something is finished. Perhaps we have had the initial idea and we want to do some user testing before we decide on the final version, for example, testing exhibition interactives.
Summative: this evaluation takes place at the end of the work, when nothing can be changed, but when you are trying to see what impact the final product has had on the intended audiences and whether the work met the original aims and objectives. This is the most common method of evaluation in museums.
The tools and methods you use will change depending on the audiences you are working with and what you are evaluating. You will probably want to choose two or more methods to evaluate the activity. This is called triangulation and should improve the validity of claims that you draw from your evaluation data. At the end of this resource is additional information about different evaluation methods, looking at their advantages, disadvantages, time needed, inputs and outputs. Here are some suggestions of different ways to evaluate either staff or participants for various museum activities:
Once you have decided on the methods that you are going to use, you will need to plan out your timeline. Below is an example timeline for an exhibition where the evaluator is planning to use surveys and visitor observation with members of the public:
Further reading and other resources
- Judy Diamond, Practical Evaluation Guide: tools for museums and other informal educational settings, AltaMira Press, 1999.
If you are looking for an easy-to-read, museum-relevant guide to evaluation, this is an excellent introduction. Judy covers planning, selecting people for the evaluation, observation, interviews, questionnaires, presenting and analysing data, and writing the report. She uses museum examples throughout, mainly from science collections. There is a further recommended reading section at the end. The book is showing its age a little, but it is still an excellent starting point for a simple overview.
- Barry Lord, Gail Dexter Lord, Maria Piacente, The Manual of Museum Exhibitions, AltaMira Press, 2002.
There are a few versions of this available (cheaper copies of the earlier editions can be found second hand online), but the chapter on evaluation, with sections by Duncan Grewcock and Barbara Soren, is well worth a read for a helpful overview of exhibition evaluation.
- National Co-ordinating Centre for Public Engagement (NCCPE) website, especially the guide to ‘Using a logic model to develop your strategy’: https://www.publicengagement.ac.uk/resources/guide/using-logic-model-dev...
To plan your evaluation, you might decide to use a logic model. Some funders are now asking for these as part of your application. If you are new to planning with logic models, this guide by Mary-Clare Hallsworth will walk you through the process step by step. There is a worked example which, although not museum-relevant, is broad enough to still be useful.
- Case studies on the websites of The British Museum, The Natural History Museum, and University of Cambridge Museums.
One of the most useful ways to learn about evaluation is to examine case studies from other museums to see what works, what you like and what you think is not so useful. The British Museum has a ‘visitor research and evaluation’ section on their website, with mostly summative exhibition evaluation reports, usually written by consultants. The Natural History Museum’s website has an ‘audience research and insight’ page, with downloadable PDFs of literature reviews and some more specific reports on activities and events. Some of these are over ten years old, and at the moment the page doesn’t appear to be updated regularly. The UCM Collections in Action website includes a section on resources, where evaluation support can be found. The blog is also regularly updated with evaluation reports.
- Ben Gammon and Jo Graham, ‘Putting value back into evaluation’, Visitor Studies Today! Volume 1, 1998, pp 6-8.
This short article looks at the barriers that organisations (in this case, the Science Museum) face around embedding evaluation into what they do. The authors provide some practical ideas for things that audience advocates can do to bring all museum staff on the evaluation journey. A thought-provoking read.
Suggested Evaluation Methods
Download the Planning your evaluation resources
Next page: Surveys and Asking Questions
Author: Dr Sarah-Jane Harknett.
Updated: February 2026.