This peer learning group is no longer active. However, all of our resources and session recordings are still available for you to explore.
The Motives for Measurement Peer Learning Group was a knowledge-sharing initiative open to anyone interested in exploring the reasons for measuring social outcomes, and the various means of doing so. In particular, we explored how measurement can serve two purposes that can sometimes conflict. The first is to support ongoing learning and adaptation in the pursuit of better outcomes, often across multiple organisations. The second is to support transparent accounts of the success or failure of those efforts.
The group was run by the GO Lab and Collaborate CIC, a practice organisation that helps public services collaborate to tackle complex social challenges. We gathered a core community of practitioners from across sectors who are interested in resolving the tension in motives for measurement. We met regularly online to hear examples of places using measurement in smart ways to support both learning and accountability. Members were invited to propose topics for discussion.
In mid-2021, we reflected on our nine sessions between April 2020 and July 2021 and agreed on an approach for the group going forward.
The Motives for Measurement Peer Learning Group is not, and has never been, a space for passionate believers in a particular 'new way' to build a movement and promote a cause (though such spaces certainly exist!). Rather, it is a space for believers in many different versions of the future to come together and bring to the surface the real-life tensions that exist between them. We do not expect many people to argue for maintaining the status quo, but we do believe there are different ways forward. Here are some of the questions we would like to tackle head-on:
There are no upcoming sessions scheduled at this time. Check back soon.
In the March 2023 session of the Motives for Measurement peer learning group, we explored participatory approaches to decision-making, including deliberative processes. More details are available here.
This session of the Motives for Measurement peer learning group explored how sense making tools can help us to understand diverse insights and evidence in complex systems. Find out more here.
The group has previously acknowledged the need for a variety of information to inform learning and accountability, including deeper, more qualitative insights alongside traditional numerical indicators. In this session, we explored how both central and local government can begin to build these features into their policy initiatives. We also considered the benefits and challenges of community engagement compared with the usual ways in which government tries to measure social challenges and the success of programmes.
First, we were joined by Dave Growcott (Community Manager at Test Valley Borough Council) to explore their use of community engagement to understand how public services respond to users' needs, and how this might contribute to greater resilience in the longer term. Then we heard from Rachel Salmon (Strategic Delivery Officer at Hackney Council), who shared insights from their approach to community engagement as part of their Partnerships for People and Place (PfPP) project, which aims to help people age well in the Hackney Marshes neighbourhood. Last but not least, Lizzy Hawkins (Assistant Director, People, Places & Communities at DLUHC) offered a central government perspective. She discussed the broader approach they have taken to engaging with communities to develop the 13 PfPP pilots, the rationale for doing so, and the challenges and benefits they've encountered.
At this session, we explored how power imbalances might affect social initiatives, and the measurement of them.
Rowan Conway kicked off the session with a provocation, arguing that community empowerment is impossible without addressing a range of structural barriers and inequities, including power asymmetries, administrative burdens, financial exclusion and a lack of representation. Giorgia Sharpe then reflected on her years of experience in community engagement, including as part of the London 2012 Legacy, and considered the challenges to achieving genuine engagement around projects in a way that informs decisions. Finally, Alannah Keogh offered some insights into Social Investment Business' efforts to use data to ensure equity is considered during their funding allocation process. Other members of the group then shared a wide range of insights and reflections on power dynamics in measurement and accountability that they'd encountered.
At this session, we explored the idea of "accountability for learning".
To begin the session, Dawn Plimmer (Collaborate CIC) provided a brief overview of 'accountability for learning', laying out the need to shift the balance of accountability and focus more on learning and adaptation. We then heard from Maria Reader and Chad Oatley from Sport England. Maria shared how they've sought to embed learning in their work, including some of the barriers to doing so, while Chad discussed how they have balanced this with accountability requirements. Finally, Alexandra Palmer (Liverpool City Region Combined Authority) shared their journey of applying 'human learning systems' to a commissioner-provider relationship, noting the value of learning, within a homelessness service context.
At this session, we explored the Supporting Families Programme's new Outcomes Framework.
Michelle Mullarkey, Policy Lead for the Supporting Families Programme at the Department for Levelling Up, Housing and Communities (DLUHC), walked us through the hot-off-the-press new Supporting Families Outcomes Framework. She discussed the broader range of outcomes being addressed and the emphasis on building consistent capacity for data and measurement across the country. She also highlighted the support for the programme within central government, which had resulted in a 40% uplift in funding at the most recent Budget, and the value of bringing this funding to bear on pressing challenges facing families.
We then heard from Paula Whitehead from Lincolnshire County Council, who also worked on the development of the Framework during a secondment to DLUHC. She offered a perspective on both the hard-fought achievements of the Framework, and some of the areas where it might still be further refined. In particular, she emphasised that families don't care about a particular intervention they receive, but rather the impact that a combination of factors have on their lives, and so there is a need to move towards measuring the whole system.
Finally, we heard from Kathy Evans, Chief Executive of Children England (the membership body for charities working with children and families in England). Kathy acknowledged the ambition and scope of the new Framework, which she said showed the government had been listening to providers around the wider challenges facing families. However, she raised concerns about the linking of financial incentives to the collection of this data, warning that it may distort the measures and lead to perverse incentives.
During the broader subsequent discussion, we were able to explore some of the core issues that motivate the group, including the tensions between measurement for accountability/control and measurement for learning.
At this session, we had a really interesting and wide-ranging discussion on the role of targets as one approach to measurement and performance management.
Nick Davies presented the Institute for Government's recent report on Using targets to improve public services, highlighting contexts in which targets can improve and damage performance. Nick also shared four of the questions government should ask itself before instituting a target regime, and several recommendations on how to make best use of targets when they are appropriate.
Sarah Albala from the UCL Institute for Innovation and Public Purpose then shared an alternative approach to targets: measuring capabilities. Sharing insights from work on Hackney and Newham's "Preventing vulnerable residents from reaching crisis" pilot programme, Sarah explored an alternative measurement approach based on understanding the development of six capabilities. These capabilities were identified based on theory and stakeholder inputs on what makes an effective welfare system.
Finally, Professor Christopher Hood offered some reflections in response to both presentations. He emphasised that social context matters greatly to the effectiveness of targets, and so even if the approach works in a particular place and time, there is no guarantee it will work elsewhere. He also highlighted the importance of considering targets alongside alternative approaches, such as rankings and intelligence for learning, and the challenge of linking targets to resources.
At this session, we discussed linking payment to measured outcomes - a much-promoted practice in the last decade that has rapidly fallen out of favour. We began with GO Lab colleagues discussing their recent SSIR article, which argues that payment-by-results and social impact bonds were conceived as a tool for top-down efficiency improvements, but might evolve to become a tool of collaborative governance. Whether that is true, and whether it is welcome, was picked up by three respondents: Max French from Northumbria University, Jenny North from Dartington Service Design Lab, and Joy McKeith from Triangle (who recently published the report Enabling Help, discussed at this year's Social Outcomes Conference).
This session discussed the recently published Human Learning Systems report.
The report raises important questions regarding the role of 'measurement' in work towards better social outcomes, which the group discussed:
(1) recognising that the primary purpose of measurement should be learning but also exploring what safeguards we need when there is bad performance due to neglect rather than complexity;
(2) acknowledging the limitations of population level data but seeing it as one of many sources of info that can support (holistic and nuanced) learning and decision making about allocation of scarce resources;
(3) recognising the value of 'bespoke' approaches but also not ignoring the evidence base that does exist about effective practice.
We started the session with Andy Brogan, Founding Fellow at Easier Inc, who, as a co-author of the report, talked through these issues from his perspective. We followed this with some reflections from Jacqui McKinlay, CEO of the Centre for Governance and Scrutiny, and then heard from group members in open discussion.
The quickest way to get a sense of what the report is about might be to watch Toby Lowe speaking at the launch event (from minute 7 to minute 27). You might also want to direct your attention to the chapter titled "The impact of Human Learning Systems for people" (p. 239 of the report).
We explored a tool to measure systems change progress. Lewis Haines from Collaborate presented a Systems Change Maturity Model developed as part of work on evaluating systems change for Save the Children's Early Learning Communities. He was joined by colleagues from New Philanthropy Capital, their partner in the work, to reflect on the process of developing the model and their experience of using it to understand the development of systems conditions. You can find more detail in the recent blog here.
We were joined by a team from Essex County Council who develop and support the Essex Partners Board – a multi-agency and countywide group of leaders. They talked about their journey to establish the partnership and to tackle major systemic challenges through it, including working with the team at GO Lab to develop a system of feedback and learning.
Below are some meaty questions that we tackled in the session:
In the first part of the session, we heard from Toby Lowe (Centre for Public Impact) and Deborah Blackman (University of New South Wales, Canberra), who talked about everyone's favourite topic: ACCOUNTABILITY. Deborah and Toby shared with us their latest thinking on how current conceptions of accountability are broken and what our alternatives might be.
In the second half of the session, Tim Hobbs (Dartington Service Design Lab) spoke on a topic near and dear to our collective heart: integrating different forms of evidence and offering his thoughts on a new paradigm for evidence use in service delivery.
We were joined by Richard Croker (LDP Programme Manager, Calderdale Council), Ben Williams (Local Pilot Manager, Sport England) and Alex Potts (LDP Evaluation Lead, Leeds Beckett University) from the Sport England-funded Local Delivery Pilot in Calderdale, a systems change initiative that aims to tackle physical inactivity. The pilot focuses its measurement at a system level, exploring how to create the conditions for stakeholders to recognise, value and embed physical activity in what they do. This involves evaluating change at a policy, working practice and delivery level.
Calderdale colleagues shared the rationale and benefits of focusing on systems-level measurement, and how they'd gained senior buy-in for this. They also introduced some of the questions they're grappling with in terms of the impact on, and accountability to, local people, and how to understand the 'ripple effects' of their work.
• How do we measure the 'indirect' impact of Active Calderdale's work (i.e. the 'ripples')?
• How do we efficiently and economically monitor and measure the economic impact of AC's work?
• How do we monitor or measure whether an actor values, or continues to value, physical activity without AC's support?
The beginning of the session covered four "how might we?" questions which formed the basis for the group's discussions going forward:
Following that, we heard a case study from Lela Kogbara, Celestin Okoroji and Victoria Cabral of Black Thrive and Chris Dayson of Sheffield Hallam University, about Black Thrive's efforts to set up a shared measurement system (SMS). The group discussed four topics:
Jo Blundell and Dawn Plimmer led a discussion around the Seven principles for a new approach to learning and accountability that emerged from the April discussions.
Michael Little from Ratio presented the findings from the learning network they have been running with Lloyds, exploring the measurement/accountability question.
Mix and match: comparing motives and means for measuring social outcomes. You can read about the topic and speakers on the GO Lab conference website here and you can watch a recording of the session below.
Listen to the audio recording of the session.
The group met for the first time for a discussion aimed at exploring the tension between demonstrating value and effectiveness in using public resources, and learning and adapting to address long-term systemic challenges. You can download a summary of the discussion by clicking below.