
The Motives for Measurement Peer Learning Group is a knowledge sharing initiative that is open to anyone interested in exploring the reasons for measuring social outcomes, and the various means of doing so. In particular, we explore how measurement can be useful for two purposes which can sometimes conflict. The first is to support ongoing learning and adaptation in the pursuit of better outcomes, often across multiple organisations. The second is to support transparent accounts of the success or failure of those efforts. 

The group is run by the GO Lab and Collaborate CIC, a practice organisation that helps public services collaborate to tackle complex social challenges. We have gathered a core community of practitioners from across sectors who are interested in resolving this tension in motives for measurement. We will meet regularly online to hear examples of places that are using measurement in smart ways to support both learning and accountability. Members are invited to propose topics for discussion.

To join the group and be added to the mailing list to hear about future meetings, please email Nigel Ball (GO Lab) or Dawn Plimmer (Collaborate CIC).

Aims of our group

We hope the sessions may inspire group members to consider multiple perspectives on how measurement is used for learning and accountability, and that this might be reflected both in their own practice and in the advice they give to others.

Over multiple sessions in the second half of 2020, the group agreed four 'how might we?' questions:

  1. How might we collectively describe accountability and its many dimensions – what it is and what elements of it are desirable in a particular setting – so that we can intelligently develop approaches to enable it?
  2. How might we enable the more relational elements of accountability and learning, i.e. trust and shared responsibility, including with citizens?
  3. How might we improve our understanding of the influence of structural system elements – of professions and standard-setting bodies, and government centralisation – on public sector measurement and management practice? How can we use this improved understanding to improve practice?
  4. How might we fruitfully discuss the presence – or absence – of a 'one best way' of measuring and managing public bodies? What contingencies or elements do we identify that are helpful to practitioners?

We shall seek to understand in more detail what contrasting approaches actually look like in practice, and why the particular circumstances of a case might lend themselves to the approach being described. We shall discuss which other approaches might fit the circumstances in question. Through this exercise, we hope to tease out to what degree the tensions can be resolved through practical tools, and to what degree they simply represent different ways of seeing the world. 

For more on this topic, these two blogs offer reflections on the group’s early discussions:

Seven principles for a new approach to learning and accountability by Jo Blundell, Visiting Fellow of Practice at the GO Lab

Building shared purpose: rethinking learning and accountability after Covid-19 by Dawn Plimmer, Head of Practice at Collaborate CIC

Upcoming session

Please contact Nigel Ball if you would like to join the next Motives for Measurement session. We will announce the date and theme closer to the session. In the meantime, we invite you to our Social Outcomes Conference, where we will be tackling some of the most pertinent issues around measuring social impact and outcomes.

Past meetings

This session discussed the recently published Human Learning Systems report.


The report raises important questions regarding the role of 'measurement' in work towards better social outcomes, which the group discussed:

  1. recognising that the primary purpose of measurement should be learning, but also exploring what safeguards we need when there is poor performance due to neglect rather than complexity;
  2. acknowledging the limitations of population-level data, but seeing it as one of many sources of information that can support holistic and nuanced learning and decision making about the allocation of scarce resources;
  3. recognising the value of 'bespoke' approaches, but also not ignoring the evidence base that does exist about effective practice.

We started the session with Andy Brogan, Founding Fellow at Easier Inc, who, as a co-author of the report, talked through these issues from his perspective. We followed this with some reflections from Jacqui McKinlay, CEO of the Centre for Governance and Scrutiny, and then heard from group members in open discussion.

The quickest way to get a sense of what the report is about might be to watch Toby Lowe speaking at the launch event (from minute 7 to minute 27). You might also want to direct your attention to the chapter titled “The impact of Human Learning Systems for people” (p. 239 of the report).

We explored a tool for measuring progress on systems change. Lewis Haines from Collaborate presented a Systems Change Maturity Model, developed as part of work on evaluating systems change for Save the Children’s Early Learning Communities. He was joined by colleagues from New Philanthropy Capital, Collaborate’s partner in the work, to reflect on the process of developing the model and their experience of using it to understand the development of systems conditions. You can find more detail in the recent blog here.

We were joined by a team from Essex County Council who develop and support the Essex Partners Board – a multi-agency and countywide group of leaders. They talked about their journey to establish the partnership and to tackle major systemic challenges through it, including working with the team at GO Lab to develop a system of feedback and learning.

Below are some meaty questions that we tackled in the session:

  • What experience can Essex draw on around building consensus to prioritise action across multi-sector partnerships? What has driven this consensus and enabled action?
  • What are the examples where groups of disparate organisations have effectively self-governed in a way that allows all to hold power and responsibility on equal terms? What norms or processes enabled the group to work in this way? 
  • What motivates and enables organisations or leaders to take collective responsibility for additional (and tougher) challenges outside their usual scope of responsibility?

In the first part of the session, we heard from Toby Lowe (Centre for Public Impact) and Deborah Blackman (University of New South Wales, Canberra), who talked about everyone’s favourite topic: ACCOUNTABILITY. Deborah and Toby shared with us their latest thinking on how current conceptions of accountability are broken and what our alternatives might be.

In the second half of the session, Tim Hobbs (Dartington Service Design Lab) spoke on a topic near and dear to our collective heart: integrating different forms of evidence. He offered his thoughts on a new paradigm for evidence use in service delivery.

We were joined by Richard Croker (LDP Programme Manager, Calderdale Council), Ben Williams (Local Pilot Manager, Sport England) and Alex Potts (LDP Evaluation Lead, Leeds Beckett University) from the Sport England-funded Local Delivery Pilot in Calderdale, a systems change initiative that aims to tackle physical inactivity. The pilot focuses its measurement at a system level, exploring how to create the conditions for stakeholders to recognise, value and embed physical activity in what they do. This involves evaluating change at policy, working practice and delivery levels.

Calderdale colleagues shared the rationale and benefits of focusing on systems-level measurement and how they'd gained senior buy-in for this, as well as introducing some of the questions they're grappling with in terms of the impact on, and accountability to, local people, and how to understand the 'ripple effects' of their work. These questions included:

  • How do we measure the 'indirect' impact of Active Calderdale's work (i.e. the 'ripples')?
  • How do we efficiently and economically monitor and measure the economic impact of Active Calderdale's work?
  • How do we monitor or measure whether an actor values, or continues to value, physical activity without Active Calderdale's support?

The beginning of the session covered the four 'how might we?' questions listed above under 'Aims of our group', which formed the basis for the group's discussions going forward.

Following that, we heard a case study from Lela Kogbara, Celestin Okoroji and Victoria Cabral of Black Thrive, and Chris Dayson of Sheffield Hallam University, about Black Thrive's efforts to set up a shared measurement system (SMS). The group discussed four topics:

  1. Making the shared measurement system (SMS) useful
  2. The mechanics of sharing data sets between diverse stakeholders 
  3. Involving the community's voice in measurement
  4. The governance and accountability around shared measurement.

Jo Blundell and Dawn Plimmer led a discussion around the Seven principles for a new approach to learning and accountability that emerged from the April discussions.

Michael Little from Ratio presented the findings from the learning network they have been running with Lloyds, exploring the measurement/accountability question.

Mix and match: comparing motives and means for measuring social outcomes. You can read about the topic and speakers on the GO Lab conference website here, and you can watch a recording of the session below.


The group met for the first time for a discussion aimed at exploring the tension between demonstrating value and effectiveness in using public resources, and learning and adapting to address long-term systemic challenges. You can download a summary of the discussion by clicking below.

Moderators