This guide offers an introduction to pricing outcomes for social impact bonds (SIBs) and outcome based contracts. It is intended for use by contracting authorities and outcome payers.
Pricing outcomes effectively is fundamental to developing a social impact bond. Because paying once outcomes have been achieved is very different to other types of programmes, such as grants or fee-for-service, this may be a new area for commissioners. This guidance will equip you with the knowledge to develop a payment mechanism for your SIB, and covers the considerations you need to take into account. Examples are provided throughout, along with four case studies in the final chapters.
Before reading this guide you may want to familiarise yourself with our other technical guidance for designing and implementing SIBs. These will be referenced throughout the guide:
There were many people who contributed to this guide through sharing their expertise and experience and we are greatly indebted to them. These people are Vaby Ellis, Lorcan Clarke, Tim Gray, Neil Stanworth, Cat Remfry, Mila Lukic, Tara Case, Louisa Mitchell, Tanya Gillett, Alison Bukhari and the entire GO Lab team. Thank you!
An outcome based contract (OBC) is a contract in which government agencies responsible for public services agree to pay an external provider based on the achievement of specific social and/or health outcomes. For instance, a contract for the provision of training to support unemployed people into work may have payments linked to the proportion of trained individuals that find employment and maintain it for a minimum period.
An outcome based contract can be underpinned by a social impact bond (SIB), where a third party investor provides up-front working capital to fund provision prior to outcome payments being made, and takes on the risk of non-payment if outcomes are not achieved.
Designing an outcome based contract requires contracting authorities to define ‘what good looks like’ for an individual or group of individuals, i.e. the desired outcomes. This is often done in consultation with providers. You must then construct a contract which aligns payment to those outcomes.
The idea is simple and powerful, but it requires a clear definition of who is affected by the social issue you aim to address (the cohort), what ‘good’ looks like for those people (the outcomes), and how much to pay if that is achieved (the price). These three considerations – cohort, outcomes and price – are all inter-related. Changing one will affect at least one of the other two.
These three considerations underpin a ‘payment mechanism’. Whilst it can seem daunting, it builds on principles that will already be familiar to a contracting authority and many of the approaches we discuss in this guide are useful beyond the design of outcome based contracts. The disciplines required can be usefully applied to the pricing of contracts in many different circumstances. And as described in our guide on Setting and Measuring Outcomes, many can also be applied in circumstances where it is desirable to measure outcomes, but you do not want to move immediately to outcome based payment. In this guide we aim to create a framework to support a contracting authority in structuring their analyses.
To turn these three considerations into something sufficiently detailed to underpin a contract for outcomes, there are other considerations that need to be described. In articulating a payment mechanism, a contracting authority will be able to say:
For instance, in the Manchester Treatment Foster Care social impact bond, it translated to:
As you can see in the diagram below, these seven components relate directly to the three inter-related considerations described earlier.
This guide covers a range of ways that you can price outcomes. In simple terms, it will:
As this is a complex topic with many components involved, it is organised in the following way. If you are viewing the guide online, you can navigate using the drop-down menu at the top of the page, or the links that will jump to the correct section. You can also download a printable PDF version of the guide using the button at the top.
1) An overview of payment mechanisms
2) Three starting points for pricing an outcome:
a. Intrinsic value
b. Efficiency through performance enhancements
c. Prevention (future cost avoided/cost savings)
3) How to adjust the price to account for:
a. Cohort specification (who the service is intended to serve)
b. Level of improvement (i.e. what good looks like)
c. Timing of payment
d. Additionality (accounting for what would have happened anyway)
4) The importance of market engagement and testing
5) Dealing with uncertainties:
a. Difficulties predicting the likelihood of success
b. The risk of paying more than you can afford if the contract is extremely successful (i.e. defining a contract cap)
6) Useful resources and links
7) Case study 1 - West London Zone social impact bond
8) Case study 2 - Ways to Wellness social impact bond
9) Case study 3 - Essex multi-systemic therapy social impact bond
10) Case study 4 - Educate Girls development impact bond
Pricing outcomes is an iterative process, meaning you will likely revisit decisions several times prior to signing the contract, slowly refining your answer each time. The first task is to define the social issue that you aim to tackle. One way to define a social issue is to look at where the current outcomes are less than desired for a clearly defined group of people (the ‘eligible cohort’).
A theory of change, or logic model, can be a useful tool to describe the social issue you are considering and how people may benefit from the service or ‘intervention’ the provider will deliver.
A theory of change describes how ‘inputs’ (i.e. resources such as money and staff time) are used to deliver ‘activities’ (e.g. recruiting participants and setting up a mentoring scheme), which produce outputs (e.g. beneficiaries participating in the mentoring scheme), which will lead to desired outcomes (e.g. improved engagement at school) and impact (e.g. better education results and life prospects).
The theory of change tool is useful in the processes of setting and measuring outcomes and evaluation, which are each explained in separate GO Lab guides.
A clear theory of change will help you in developing a good understanding of the social issue you aim to tackle, what inputs and activities you believe are currently being put towards tackling it, and how the intervention may change how participants ‘flow’ through the system and achieve better outcomes. In particular, you will have to think about how many people you wish to target, how many you expect to actually engage with and how many you expect to achieve better outcomes.
The theory of change can also help you make sense of the different components of the payment mechanism described earlier.
As we will see later on, these considerations will be important in making adjustments to the price.
A theory of change is not a perfect tool. Its major limitation is that, by focusing on one particular group of people and considering a single (or limited) ‘intervention’ which you hope will help them achieve the desired end outcome, you risk under-emphasising the wider effects of other people and other ‘interventions’ (or circumstances) in their lives. For example, a housing programme focused solely on providing accommodation may not be effective for a certain cohort of rough sleepers if they also face substance misuse issues which remain unaddressed. However, a theory of change is a well-known concept and provides a useful tool to analyse a problem and assess how effectively it might be tackled.
Regardless of whether you will proceed with an outcome based contract, the exercise described here is helpful in informing thinking about better ways to meet the needs of populations. You are articulating your understanding of what is currently happening to the ‘eligible cohort’, and thinking about how, if you or others do some work with them (an ‘intervention’), their outcomes might improve. As we will see later in the guidance, this understanding will be particularly useful to inform your early engagement with the market (providers and, in the case of SIBs, investors) and will eventually influence the price of the outcomes.
It is worth noting that this is not the only tool that should be used in order to effectively manage your outcome based contract. It is important to distinguish between:
Also, you may wish to consider a payment structure that combines elements of a ‘fee-for-service’ contract and elements of an ‘outcome based’ contract. For instance, you may agree initial payments for activities (such as engagement and assessment) and later incentivisation payments subject to the achievement of desired, longer-term outcomes. This reduces the provider’s financial exposure – the period over which they incur costs before payments start to flow. This is further explained below, under the ‘timing of payment’ section.
There is not a great deal of evidence on the best balance between fee-for-service and outcome payments, and it is likely to vary depending on the policy area. In general, though, any fee-for-service element of a contract requires a specification based on activities, not outcomes, so can negate some of the benefits of an outcome based contract, such as greater flexibility for the provider in how they deliver the service, and the financial incentive to focus on specified outcomes. When you go through the process of setting and measuring outcomes, as described in our guide, you may identify intermediate outcomes or even outputs that are suitable for payment, which have a similar effect for the provider as a fee-for-service payment, without requiring the specification of activities.
A. Intrinsic value of the outcomes (quantifying the value to society)
B. Efficiency through performance enhancements (same outcomes at lower costs, or more outcomes at same costs)
C. Prevention (future cost avoided/cost savings)
In theory, the maximum price of an outcome should reflect how much we, as a society, value that outcome. This is the perspective taken in the appraisal and evaluation of government projects, as described in the UK Treasury’s Green Book.
In practice, however, outcome based contracts and SIBs have also been used for other pragmatic reasons, such as generating efficiencies (i.e. finding more economical ways to deliver the same outcomes at a lower unit price), or reducing future costs (i.e. investing in prevention to reduce demand for services).
We will describe how to use each of these three starting points: intrinsic value, efficiency, and prevention. But there are often multiple reasons to use an outcome based contract – these approaches should not be considered mutually exclusive, but overlapping. There may also be a wider set of factors driving the decision to use an outcome based contract, and not all of these will be captured in the pricing approach.
In any of these approaches, there are normally inherent upper and lower boundaries to the total amount that the contracting organisation should expect to spend. The outcome price will be set somewhere within these boundaries.
The upper boundary for total outcome payments should be what the contracting authority determines those outcomes are worth. This could be due to their intrinsic value, their value compared to current practice, or the value of future costs avoided, as per the approaches described below. Any higher than this, and the programme is not delivering value for money to commissioners.
The lower boundary for total outcome payments should be equivalent to the amount that the service is expected to cost to deliver if it achieves an acceptable level of success. Any lower than this, and the programme does not make sense for providers/investors to participate in.
These upper and lower boundaries can be visualised as in this illustration:
If the programme does better than the minimum acceptable level of success (i.e. it delivers more and/or better outcomes), then total outcomes payments will rise and the provider (and/or investor funding the service) should expect to make a surplus or return.
If the programme does worse than the minimum acceptable level of success, then the contracting authority will pay less, but the programme will not normally be viable if this continues, as providers/investors will lose money.
Your goal as the contracting organisation should be to achieve the best value for money you can, whilst also ensuring that the programme is viable for providers and investors. This means offering payments generous enough to support work with harder-to-help members of the cohort, and accepting that the risk providers/investors take of not succeeding needs to be balanced by the potential to make a surplus if they do better than expected.
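The boundary logic above can be sketched numerically. The figures below are purely illustrative assumptions, not drawn from any real contract:

```python
# Illustrative sketch of upper/lower boundaries for total outcome payments.
# All figures are hypothetical assumptions for demonstration only.

value_per_outcome = 5_000   # what the authority decides an outcome is worth
expected_outcomes = 40      # outcomes expected at an acceptable level of success
delivery_cost = 150_000     # cost of delivering the service at that level

upper_bound = value_per_outcome * expected_outcomes  # value-for-money ceiling
lower_bound = delivery_cost                          # viability floor for providers

def price_is_workable(price_per_outcome: float) -> bool:
    """A price works if, at the acceptable success level, total payments
    sit between the delivery cost and the value-for-money limit."""
    total = price_per_outcome * expected_outcomes
    return lower_bound <= total <= upper_bound

print(upper_bound)               # 200000
print(price_is_workable(4_000))  # True: 160,000 sits within [150,000, 200,000]
print(price_is_workable(3_000))  # False: 120,000 is below the viability floor
```

In practice the bounds would come from the analyses described in this chapter, and the acceptable success level from market engagement.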
The intrinsic value of an outcome is an attempt to put a figure on the amount, in monetary terms, that society values a particular outcome. The value to society is commonly defined in moral and political terms, but economics has a tool that attempts to quantify it, known as Social Cost-Benefit Analysis. It is widely used across government to appraise spending options and involves calculating the value to society of achieving the outcome. The value to society as a whole is likely to be much higher than the value to one particular part of the public sector – and correspondingly, much higher than the outcomes price. The figure is often used in business cases, but in certain circumstances can also provide useful analysis to inform outcome price setting. See the UK Treasury Green Book for more information.
Social Cost Benefit Analysis (SCBA) is a type of analysis which quantifies, in monetary terms, as many of the costs and benefits of a proposal as possible, including items for which the market does not provide a satisfactory measure of economic value. When used in options appraisal, the benefits (in monetary terms) are compared to the anticipated costs, to decide to what extent the option provides ‘Value for Money’ i.e. whether the financial benefits exceed the costs.
When using SCBA to price outcomes, the emphasis is on quantifying the benefits, in order to reveal what price you are willing to pay whilst still achieving value for money.
Here, we consider the steps for an outcome payment for an individual. If you are considering a cohort-wide outcome measure (and corresponding price), you will need to aggregate the individual figures to give a total value.
The steps are to:
It is almost impossible to assess all of the possible impacts arising from the achievement of a certain outcome, so you should ensure your approach concentrates on the most important and immediate ones. It can be useful to start thinking about who the outcome might impact upon:
You will need to quantify each of these impacts as well as being able to describe what the impact looks like. Sourcing this information can be difficult; ideally the scope and scale of these impacts will come from research done on the cohort you are aiming to target. Otherwise a contracting authority and providers might be able to provide data or feedback on the types and/or quantity of impact. Alternatively, you may need to rely on published research for similar cohorts.
These impacts will fall into two broad categories: fiscal benefits and economic/social benefits. Fiscal benefits are impacts on the public sector budget which are either ‘cashable’ (where expenditure released from the change in outcomes can be reallocated elsewhere, e.g. no longer needing to spot-purchase residential children’s care) or ‘non-cashable’ (which represent a benefit to the public from freeing up resources even if public expenditure is not reduced, e.g. reduced demand from frequent attenders in A&E allows staff to concentrate on more critical cases). Economic/social benefits are wider gains to society and are often harder to identify and attach a monetary value to.
Once you have sketched out the impacts arising from achievement of the outcome you need to attach a monetary value to these.
Finally, you should reflect on these figures to decide the total value of the outcome, and this total should represent the upper limit of the public sector’s ‘willingness to pay’ for achieving that outcome (although in reality it is unlikely the analysis has been able to capture and monetise all the impacts).
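A minimal sketch of this aggregation step, using entirely hypothetical monetised impacts grouped into the two broad categories described above:

```python
# Hypothetical figures only: monetised impacts of one achieved outcome,
# grouped into fiscal benefits and wider economic/social benefits.
impacts = {
    "fiscal_cashable":     2_000,  # e.g. a placement no longer purchased
    "fiscal_non_cashable": 1_500,  # e.g. staff time freed up
    "economic_social":     6_500,  # wider gains to society
}

# The sum is the upper limit of 'willingness to pay' for the outcome.
total_value = sum(impacts.values())

# A single authority pricing only on its own budget impact would look
# at the fiscal benefits alone (see approach C, Prevention).
fiscal_only = impacts["fiscal_cashable"] + impacts["fiscal_non_cashable"]

print(total_value)  # 10000
print(fiscal_only)  # 3500
```

The gap between the two figures is what makes the case for co-commissioning: much of the value accrues beyond any single budget.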
If you are one of many public sector organisations identified in the analysis, you may wish to price an outcome based only on your own fiscal benefits. This approach is described below under approach C, Prevention (future cost avoided/cost savings). Alternatively, you may wish to use the analysis to engage other public sector contracting authorities in order to price the outcome in accordance with realised benefits for them.
This method will leave you with a good understanding of the wider impacts of the outcomes and can help construct the case for co-commissioning or facilitating cross-agency working. For example, this approach can be useful for contracts which aim to meet the needs of individuals who move through different existing services which do not work well for them under the status quo, such as rough sleepers. It can also provide a starting point if a service is completely new.
Outcomes-based contracts and SIBs developed on this basis do not have to pass the sometimes very high hurdle of proving cashable savings, and especially cashable savings within a short time period which directly accrue to the contracting authority. However, the approach is only feasible if there is a budget available to pay for the outcomes which does not depend on actual or notional savings. For this reason, the approach may be more suited to central government departments than to local public sector organisations, unless it is being used to illustrate the benefit of local co-commissioning.
It can be difficult for teams with little or no analytical resource to make use of the tools of social cost benefit analysis. Furthermore, if the analysis has not used local data, but relies on standard datasets, it may undermine trust in the resulting figures.
It is possible to commission services through an outcome based contract using this pricing approach, and then run an evaluation which makes a robust assessment of the range of benefits which it was expected to deliver. For example, a SIB designed to reduce rough sleeping might tie payment only to that – but the price paid may also have factored in expected reductions in crime and health service costs. An evaluation can show if these were achieved as expected, and therefore whether the price paid was justifiable.
As quoted in the CBA guidance for local partnerships, “CBA is not an exact science and its outputs are a guide to decision-making, not a substitute for thought”. You are deciding what an outcome is worth based on something other than what it has cost to achieve such outcomes in the past. However, you are still likely to want to take likely delivery costs into account.
The following is an example from the CBA Guidance for local partnerships.
Here the analysis has identified that the main impacts are to the individual who overcomes drug abuse and to a number of public sector bodies: the NHS, the MoJ and local authorities. For the individual, the main benefits are improved health and the cost saving from no longer purchasing drugs. For the public sector, the greatest fiscal benefit falls to the NHS with a unit cost of treatment of £2,136, followed by the justice system with an impact of £1,495 in reduced demand on the criminal justice system. There are also impacts the analysis has not attached a monetary value to – the reduction in fear of crime and improved desirability of the local area.
As a contracting authority undertaking this analysis you may now decide that an outcome focused on reduction in drug abuse is ‘worth’ £16,399 to society. Of course, using this figure for practical purposes is problematic. It is unlikely that an individual commissioning body would be willing to pay anywhere near this amount, and even if co-commissioning was used (itself often challenging), the wider economic benefits fall across the whole of government and society.
This example is from Social Finance and the full report can be read here - rough sleeping SIB report
In the GLA Rough Sleeper SIBs the analysis concentrated on the fiscal impacts of rough sleeping across a range of public sector organisations rather than the wider social impact. The payment metrics were then designed with a cohort member cap of less than £20,000 to ensure that outcomes would release savings into the wider system.
Note that this approach will have some cross-over with the ‘Prevention’ approach as some of the impacts you assess will be linked to impacts and costs arising elsewhere in the system/further downstream.
This approach focuses on achieving efficiencies or enhancing performance to provide better value for money. It is relevant if you believe that it is possible to achieve better value for money from current service provision, i.e. if you think it is possible to achieve the same outcomes for less money than a current service, or better outcomes for the same money.
To use this approach, you need to have (or develop) a good understanding of how much you are currently paying for the outcomes you are achieving. There may be instances where you are currently commissioning a service on a ‘fee-for-service’ basis, and feel that while the service is being delivered in line with the service specification (or equivalent), it is not producing as many good outcomes for individuals as you think it could. Consider the following illustration.
In a ‘block contract’ or in a ‘fee-for-service’ arrangement, you may be paying £10,000 to engage 100 participants in the training. This is £100 per participant (£10,000 divided by 100 participants). You pay this regardless of the success of the programme in achieving the set outcome. However, if only 20 of them achieve the desired outcome, the price per outcome is £500 – that’s £10,000 divided by 20 participants. This price per outcome (£500), and not the cost per participant (£100), is the relevant benchmark in defining the outcome price if your aim is to drive efficiencies.
A contract can be more efficient by achieving the same outcomes for less cost. For instance, it may be possible to deliver a similar intervention, engaging 100 people with 20 of them achieving the desired outcome for £8,000. A provider who can deliver at this reduced cost, would be able to accept a price per outcome of £400 (that’s £8,000 divided by 20 participants).
Alternatively, a provider may run a more effective intervention for the same cost of £10,000, perhaps because they have new or more effective techniques, or simply a sharper focus on the desired outcome. They might thus expect 25 people to achieve the desired outcome, rather than 20. This provider may also accept a price per outcome of £400 (that’s £10,000 divided by 25 participants).
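The worked example above can be summarised in a few lines, using the figures from the illustration:

```python
def price_per_outcome(total_cost: float, successful_outcomes: int) -> float:
    """Total spend divided by the number of people achieving the outcome."""
    return total_cost / successful_outcomes

# Current fee-for-service benchmark: £10,000 engages 100 people, 20 succeed.
current = price_per_outcome(10_000, 20)         # 500.0 per outcome

# Route 1: same outcomes for less money (£8,000, still 20 successes).
lower_cost = price_per_outcome(8_000, 20)       # 400.0 per outcome

# Route 2: more outcomes for the same money (£10,000, 25 successes).
more_effective = price_per_outcome(10_000, 25)  # 400.0 per outcome
```

Either route arrives at the same benchmark price of £400 per outcome, which is why the price per outcome, not the cost per participant, is the figure to negotiate around.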
If the new service is correctly priced, this approach will allow you to demonstrate whether you will save money compared to previous spend, deliver more outcomes, or both.
In this approach it is possible to work within existing budgets and, if desired, to cap payments to no more than the previous spend.
This approach is often easier to pursue if an existing service is being re-let, as that gives a ready comparison. This can sometimes mean there is resistance from stakeholders with an interest in maintaining the status quo. However, it may be possible to transform the level of performance by introducing an outcomes focus, depending on how effective the existing service is.
As there is already a service in existence to compare with, a lot of the uncertainty about referral rates, level of need of the client group, and to some extent level of anticipated success, is reduced. As we discuss later in the section “dealing with uncertainty”, in the case of SIBs this may also imply that investor returns could be lower.
This method assumes (or will lead you to develop) a good understanding of the costs of current service provision, and how successful it is in achieving the desired outcomes. These can both be difficult to assess. You may be constrained by a lack of reliable information to indicate both full costs and current outcome achievement for existing services.
Current costs may be difficult to assess in full as they include costs that may be difficult to apportion (e.g. managerial and other staff cost, including ‘overheads’). In an outcomes contract or SIB, there will be costs built in for investor returns if the service achieves its aims, and sometimes for increased performance management and data collection. But these costs are worth paying if performance is improved enough compared to the full cost of existing service provision, especially when considering managerial / overhead costs which are often left out.
This approach can be more prone than others to creating too narrow a focus on achieving the specified outcomes rather than meeting a wider range of needs, because other approaches often need to measure the wider benefits in order to quantify them.
This approach focuses on moving provision upstream, to prevent future social problems. It uses a similar form of analysis to the ‘intrinsic value’ approach, but with a narrower focus.
Preventing social problems from arising in the first place is likely to lead both to better social outcomes for people and to lower public sector spending. However, it is difficult to allocate resources to preventative activities when there are budget pressures, and people in need of urgent care who must be supported first. Furthermore, the difficulty of knowing whether preventative activities will actually deliver the promised reduction in demand makes it harder to invest in them over the long term.
An outcome-based contract may help enable a focus on preventive activities that will lead to better future outcomes, and reduced demand (and therefore costs). It also has the potential to enable ‘double running’ of budgets, where preventative and remedial work is run in parallel, provided the demand reduction unlocked by the preventative work is able to generate genuine future budgetary savings.
In the case of secondary prevention, there may be some clear cohorts of citizens who are already negatively affected by social issues and incur considerable cost to public services – and there may be interventions that have the potential to improve things, or at least prevent them getting worse. For example, children in residential care could be moved to foster families, saving money and typically improving outcomes for the child. Adults in residential care settings might be helped to achieve greater independence, with similar effects. Or people with long-term health conditions might be helped to better self-manage these, reducing their reliance on healthcare services.
Opportunities for primary prevention – stopping negative outcomes ever occurring – can be harder to identify. The starting point for this is to identify the underlying causes (the ‘root-causes’) of social problems, so that interventions can be sought to tackle these causes.
A basic root-cause analysis consists of:
On the basis of a root-cause analysis, a contracting authority will be better able to understand the cost and outcome implications of intervening at different points along an individual’s journey through the system.
The approach has many similarities with the intrinsic value approach (approach A). However, this approach is narrowed in focus: rather than trying to comprehensively analyse and put a monetary figure on every single wider benefit of an outcome being achieved, you are focusing only on the outcome which represents a social need avoided, and only on the money saved from avoiding that problem to the contracting authority or authorities. The distinction between ‘cashable’ and ‘non-cashable’ savings made earlier is relevant here, with the focus being on cashable savings.
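A minimal sketch of this narrower, savings-only calculation. The figures are hypothetical assumptions, loosely echoing the foster care example mentioned earlier, not real placement costs:

```python
# Hypothetical prevention pricing: value the outcome only by the cashable
# saving to the contracting authority, not the full social value.
weekly_cost_residential = 3_000  # assumed cost of a residential placement
weekly_cost_foster = 500         # assumed cost of a foster placement
weeks_avoided = 52               # a year of residential care avoided

cashable_saving = (weekly_cost_residential - weekly_cost_foster) * weeks_avoided
print(cashable_saving)  # 130000
```

Under this approach, that cashable saving (£130,000 here) acts as the ceiling for outcome payments, rather than the larger total social value an intrinsic value analysis would produce.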
In fact, a root-cause analysis may highlight that the public body responsible for contracting out a preventative intervention may not be able to afford it because the financial savings from the social problem being prevented may accrue to a different organisation, or multiple other organisations. In this case, either the other public body or bodies need to be brought in as a contracting authority, or a different approach to pricing the outcome needs to be employed.
Like in the intrinsic value approach described in A, root-cause analysis requires analytical skills, but many resources are available.
Working to understand the cause of particular social problems will help a contracting authority to listen to those in need of support in the community. It will help you to articulate a strategy that links actions to long-term desired social outcomes, and to develop a monitoring framework for ongoing learning.
It is often not at all clear what the most effective interventions are to achieve prevention of undesirable outcomes, and the best methods may vary by segments of each client group with different characteristics, or even by individual. Furthermore, interventions may need to be iterated and/or adapted during the programme as it becomes clearer what works. Paying for outcomes avoids some of the difficulties which would therefore arise in trying to specify defined interventions, and encourages services to be responsive to the changes in circumstances of each client.
Depending on the issue being addressed, the time delay between the intervention and the achievement of medium to long term outcomes which are of most value to the contracting authority may make an outcome based contract unfeasible for a provider and/or social investor. This is because the financing gap created by the need to pay for providing services up-front, but paying for outcomes later, would be too large. It may also be very difficult to properly evaluate the risk that the desired outcome will not be achieved.
These issues can lead to either a big increase in the outcome payment that the provider market demands (to compensate for the financing cost and risk premium caused by the time delay), or the use of proxy outcomes which may be shorter term and less risky, but may not correlate well with the longer term outcomes which are of real value.
There can be a problem in this approach with "prevalence”: the number of people in the target group who display (or might go on to display) the undesirable results which the intervention is intended to mitigate (or avoid). As explained later under “cohort specification”, this problem can be addressed by careful definition of the cohort and the process of referring people to the intervention to ensure that it is targeted at those who truly need it – and when this is difficult, by reducing the price offered.
There can also be a problem with “deadweight” which is the term often used for those who would have achieved a positive outcome even without the intervention. This is explained in detail in the “additionality” section below.
As outlined at the beginning of this guidance, setting the price is just one aspect of creating a payment mechanism. (See Figure 1.2) The price you set will interact with other aspects of your outcome based contract, which is why the approaches described in Chapter 2 are only starting points, not the final answer. You need to strike a balance between multiple considerations in order to create useful incentives to achieve outcomes, without making the contract impossibly expensive or risky for a provider to deliver. When doing this, it can be very helpful to discuss with others who have faced similar questions. The GO Lab has a range of activities that support peer learning, including regional workshops and events, as well as the SIB Knowledge Club.
As a basic rule-of-thumb, cohorts that are harder to help or have further to ‘travel’ in order to achieve the desired outcomes will be more costly to work with, as they will need more intensive support. As we describe in the likelihood of success section below, harder-to-help cohorts will also tend to be less likely to achieve the outcomes than easier-to-help ones. For such cohorts, the provider will need to engage with many more people than the number they expect will achieve a successful outcome. This increases costs. These increased costs need to be compensated through the payment mechanism.
Cohort specification will enable you to (i) define a specific cohort of people with similar needs and (ii) estimate variation in difficulty-to-help amongst beneficiaries. How easily can you describe the characteristics of those for whom you are commissioning the service? The more targeted and similar the cohort, the more straightforward the approach to price setting can be.
With a cohort who all have comparable levels of need, there are also fewer ‘perverse incentives’ (for example, there is less risk that a provider may ‘cherry-pick’ and only work with ‘easier cases’). This makes knowing how much to pay easier: it can be discovered through market engagement or by examining the cost of in-house provision.
In many cases, however, it will be difficult to identify a cohort of sufficiently similar people, especially for services aimed at supporting people with complex needs. Often a cohort will be large and diverse with a broad range of needs and potential intervention / support packages, which increases the risk of ‘cherry picking’. In these cases, it is still possible to reduce the risk of ‘cherry-picking’, in three ways:
A key part of defining payable outcomes is setting a point of improvement from a baseline at which payment is made. This can be referred to as the ‘threshold’, ‘target’, ‘metric’, ‘milestone’, or ‘trigger’ at which the payment outcome is deemed to have been achieved. We will use the word ‘target’ here. Essentially, it means defining ‘what does good look like’ or ‘what is a meaningful improvement’? The basic rule of thumb here is that the greater the level of improvement desired within the cohort identified, the more costly it is likely to be for providers to achieve the targets, as they will need to offer more intensive support. In the last section we discuss ways to tackle uncertainties around a provider’s likelihood of achieving the level of improvement set out.
The longstanding discussion in education about how to measure learners’ performance is perhaps a helpful analogy to think about the forms these targets can take. The question is whether student performance is best measured by attainment / proficiency (meaning a student’s performance against a universal benchmark at a given time), or progress / growth (meaning a student’s performance improvement or decline over time, relative to the average or their own starting point). Proponents of progress or growth scores say that using attainment or proficiency scores encourages teachers to focus less on those students who fall far below the attainment threshold, and unfairly stigmatises schools whose intake has more of these students. They argue that we should be using progress or growth measures if the goal is to assess schools on how well they serve students, not on which students they serve. In the UK, the long-standing but now abandoned ‘A*-C’ measure of GCSE grades is an example of how an attainment-based cut off can have these effects, as it encouraged schools to disproportionately focus resources on students on the C/D borderline, at the expense of those expected to get lower and higher grades.
In the world of defining outcomes, the same logic holds true. You can think of fixed, binary targets – like an ex-offender not re-offending during a set period, or someone who is homeless living continuously in accommodation over a set period – as though they are attainment scores. Indeed, evidence suggests that they focus providers’ attention on beneficiaries who are around the cut-off point, often at the expense of individuals who have further to travel. For instance, a binary payment for ‘not re-offending’ would incentivise providers to work with offenders who offend only a few times to bring this down to zero, rather than working to reduce reoffending rates for people who have a history of many offences (without being able to eliminate them entirely).
Targets that reflect the ‘distance travelled’ can reduce these risks, but they can be more challenging to measure – typically requiring more granular and ‘sensitive’ measures or measurement approaches. In taking this approach, you will need to either (a) measure, and take into account, an individual’s or cohort’s starting point, and determine an acceptable amount of progress to have been made by the end; and/or (b) show degrees or ‘steps’ of improvement, and add extra payment targets to reward these. Either of these approaches adds complexity, but it is key to think about them and to create a solution that addresses these points.
You can read more about tackling these issues in our guide to setting and measuring outcomes.
The cost implications of the timing of payments are all to do with the fact that an outcome based contract creates a financing need for a provider organisation. A provider provides a service upfront but is not paid until later, when outcomes are achieved – so they have to use their own money, or borrow it from someone else (like a social investor). The effect of different timings on this is best illustrated with a simple example:
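As an illustration of this financing dynamic (all figures below are hypothetical assumptions for illustration only, not drawn from any real contract), a short calculation shows how a longer wait for the outcome payment drives up the financing cost a provider needs to build into its price:

```python
# Illustrative sketch: financing cost of waiting for an outcome payment.
# All figures are hypothetical assumptions, not from any real contract.

def financing_cost(delivery_cost, annual_interest_rate, months_until_payment):
    """Simple-interest cost of borrowing working capital until payment arrives."""
    return delivery_cost * annual_interest_rate * (months_until_payment / 12)

delivery_cost = 100_000        # assumed up-front cost of delivering the service
annual_interest_rate = 0.08    # assumed cost of capital for the provider

for months in (6, 12, 24):
    cost = financing_cost(delivery_cost, annual_interest_rate, months)
    print(f"Payment after {months:2d} months -> financing cost ~ £{cost:,.0f}")
```

Doubling the wait roughly doubles the financing cost, which is ultimately passed back to the contracting authority through higher outcome prices.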
The same basic principle applies whether providers self-finance, take out a loan, or receive backing from a social investor who pays them a traditional service fee and takes the financial risk on themselves. The longer the provider or investor has to wait to receive payment for delivering outcomes, the higher the strain on their finances, and the greater the cost.
You could add an additional earlier payment, for example at the end of 6 months, for any participant who has entered accommodation. This is not really the goal you are looking for – as they may soon return to sleeping rough – but it means your provider would expect to get some income sooner, so will go less into the red, need to borrow less money to run the contract, pay less interest, and require a lower amount to be built into the contract cost.
In addition, you may, as discussed earlier in this guide, consider payments for activities as well as for outcomes. This will have much the same effect in enabling the provider to get some payment sooner, but at the possible expense of reducing the focus on desired outcomes, or alignment to the policy objectives.
In short: outcomes which are paid early on in the delivery phase can lower the cost of financing, but may lessen the focus on longer term outcomes that are usually more aligned with the overarching policy aims of the project.
Note that a greater focus on longer term outcomes, as well as increasing the financing amount, can also increase the provider’s (and/or investor’s) perception of risk. This can result in an increase in interest payments or financial return expectations. This is discussed further in the next section, ‘likelihood of success’.
In projects which feature both early and later-term outcomes, you need to be aware that your payment structure may create incentives for providers to treat these outcomes ‘interchangeably’ and focus solely on delivering short-term outcomes for a higher number of beneficiaries. There are ways to mitigate this: having a limit on the total number of participants whom the provider is allowed to work with within an overall payment envelope (or “contract cap” – see below), and/or higher payments for longer term outcomes.
Public sector commissioners can often only pay money in the year they have the money budgeted for. So it is important to profile the expected level of outcomes payments in each financial year, and if possible build in some flexibility to move money between financial years if things don’t turn out as initially envisaged.
“Additionality” refers to an impact that is “over and above what would have happened anyway”. You could describe it as “over and above business as usual” or “what we currently expect to happen”. A description of what would have happened anyway is known as the “counterfactual”. Determining the level of “additionality” in a robust way helps to show that any positive effect was indeed caused by the work that was done – this is the concept of “attribution” (i.e. it shows that the outcome is “attributable” to the intervention).
The answer to “what would have happened anyway” is very rarely “nothing”. For instance, in projects aiming to support people back into work, an obvious outcome payment is sustained employment. However, some of the participants might have found employment even without the intervention. The amount of this natural improvement that takes place is called the “deadweight”.
Unfortunately, it is never possible to observe “what would have happened anyway” unless you know how to create a parallel universe, and so the best you can do is to estimate it.
If you are running a quantitative evaluation alongside (or as part of) the contract, then it is feasible to think about using statistical tools to establish “additionality”, such as randomised controlled trials or similar “quasi-experimental” techniques. These tend to use some sort of comparison group who share similar features to the cohort being worked with, but who do not receive the service. We explain these techniques in our introduction to evaluation. For instance, the Peterborough SIB and the Ways to Wellness SIB estimate additionality by using a comparison group technique.
If you are using one of these techniques as the basis for measuring the outcomes you are paying for, you will have a good degree of confidence that those outcomes are attributable to the intervention. The risk of paying for things that would have happened anyway will be reduced, and there is no need to adjust the price set at the start to account for these considerations.
Often, it is not practical or affordable to use a comparison group to estimate additionality as part of the outcome measurement approach. When this is the case, you should think about factoring a prediction of what the deadweight might be into the price you offer. If the contract is using a “proven” intervention, there may be previous research and/or evaluations that will give you a reasonable indication of the likely level of additionality for that intervention. Alternatively, you can estimate additionality by articulating a “business as usual” scenario, and comparing it to various success scenarios for the new service. If you can access the right data, you might do this through an analysis of historical trends for the particular cohort that is eligible for the intervention, and project those trends into the future. There will be a degree of uncertainty and you might include a range of scenarios which account for high and low estimates.
Be aware that this extrapolation into the future may be inaccurate, especially for longer-term outcomes, as other external factors could influence the trends positively or negatively – such as a change in the economic outlook or in other areas of policy. In defining additionality using past trends, you could end up paying for outcomes that would have occurred anyway, because changes outside the control of the provider are making the outcomes easier to achieve. The reverse is also true: you may end up not paying for outcomes that the provider has legitimately achieved, because changes outside the control of the provider are making the outcomes more difficult to achieve than they would have been in the past. As we explain in our Guide to Setting and Measuring Outcomes, you can mitigate this by aiming to set outcome measures and targets that are less susceptible to such external factors, but it is difficult to eliminate this risk entirely.
The value of these projections is not in getting a perfectly accurate estimate of what will happen in the future. It is to focus the discussion on what we expect to happen, to make our assumptions explicit, and to improve our ‘mental models’ by discussing them openly with other stakeholders.
As a general rule, you should adjust the price of outcomes downwards if you believe some outcomes would happen anyway (i.e. there is “deadweight”) but you don’t have confidence that you will be able to accurately determine how many (which you might do by measuring a comparison group). This is a legitimate measure to avoid paying for things that would have happened anyway, and to ensure good value-for-money to taxpayers.
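A minimal sketch of this downward adjustment, assuming a simple linear rule and purely hypothetical figures (neither the rule nor the numbers come from this guide), might look like:

```python
# Illustrative sketch: adjusting an outcome price for estimated deadweight.
# The linear adjustment rule and all figures are hypothetical assumptions.

def deadweight_adjusted_price(full_price, deadweight_rate):
    """Pay only for the share of outcomes believed to be additional.

    deadweight_rate: estimated fraction of outcomes that would have
    happened anyway (e.g. 0.3 means 30% of successes are not additional).
    """
    if not 0 <= deadweight_rate < 1:
        raise ValueError("deadweight rate must be between 0 and 1")
    return full_price * (1 - deadweight_rate)

# If the full value of a sustained-employment outcome is £5,000 and
# roughly 30% of participants would have found work anyway:
price = deadweight_adjusted_price(5_000, 0.30)
print(f"Adjusted outcome price: £{price:,.0f}")  # £3,500
```

In practice the deadweight rate itself is uncertain, so you might run this adjustment across the high and low estimates discussed above rather than relying on a single figure.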
As discussed earlier in this guide, pricing outcomes in an outcome based contract requires the contracting authority to first form a clear idea as to the value it attaches to a set of outcomes, and then adjust the price of those outcomes depending on the complexity of need within the target cohort, the level of improvement sought, the timing of payments, and the level of confidence in determining additionality. For an outcome based contract to work effectively it needs to strike a balance between what is valuable to the contracting authority and what is possible from a service delivery perspective, and what is acceptable from a financial risk perspective. That is why it is important for contracting authorities to engage with providers and (in the case of SIBs) investors early on, and remain open to refining the price of outcomes in light of the feedback from the market.
As will be shown in the next section, engaging with the market is a key tool in dealing with uncertainties around what can be achieved.
There are no hard and fast rules around when to engage with providers and investors. Much will depend on the individual circumstances of the project being developed. As a broad principle, it is helpful to get early feedback on your pricing approach. Before engaging with stakeholders around pricing, however, a contracting authority should have a rough idea of the value of the outcomes they are seeking, and a view of the key parameters as discussed in the ‘Adjusting the price’ section above (target cohort, level of improvement sought, timing of payments, and approach to determining additionality). This helps to give potential providers something to respond to and means both sides will get more from the interactions. (The major exception to this would be if a project is co-designed, or is being led by a provider who is proactively seeking a partnership with the contracting authority).
Soft market testing will give a contracting authority the opportunity to test the assumptions they have made around the ability of providers to deliver for the price set, including their ability to finance the upfront costs of delivery. This would be through reserves or loans in a two-party outcome based contract, or using investors in a SIB, in which case the appetite of this third party to back the project needs to be factored in too.
One of the main priorities for a contracting authority will be to ensure that the engagement with providers and investors results in an agreement that both gives the greatest chance of achieving impact, and delivers the best possible value for taxpayers’ money. To do so, a contracting authority should ensure that they are well prepared for the discussions with the market by having done their own analysis. As already mentioned, they should have a robust understanding of the impact of the different factors described in the ‘Adjusting the price’ section.
Often contracting authorities see a benefit in treating the contract more as a partnership than a transactional relationship, which makes engagement with the market especially important. While early discussions are invaluable in designing a robust payment mechanism, there will of course be limitations around how much providers and investors can share and some information might be commercially sensitive. A contracting authority should be clear upfront as to how the information shared will be used, and agree terms of sharing information with all those engaged.
A partnership approach (especially if it is to work in the long-term) requires trust and transparency, and will not work if purely transactional. That is why it is important that all parties conduct the negotiations in a spirit of openness and honesty, and are clear about the basis on which information is shared.
For the market engagement to be meaningful, a contracting authority should be prepared to allow room for revisions (prior to procurement) and negotiations (as part of the procurement process to appoint provider(s) and/or investor(s)). Having this flexibility is instrumental to building an effective relationship with the stakeholders. It is important to allow sufficient time throughout the project development process for these conversations to take place, and a contracting authority might find it helpful to set the right expectations within their own organisation around (i) the timeframes for completing the work, (ii) the need to have some flexibility in the negotiations with the provider(s) and investor(s) and (iii) the fact that numbers are indicative in the early stages and subject to change during procurement negotiations.
‘A contracting authority should approximate their expected outcomes triggers and associated payments early on, but remain open to refining them through dialogue with the provider and/or investor. Including a learning and discovery phase in the contract can also allow for the early learning to refine the payment mechanism, although a process for this needs to be documented in advance to ensure it isn’t a chance for either party to ‘shift the goalposts.’ Katy Pillai, Big Issue Invest
An outcome based contract introduces uncertainties into the contract payments as these are linked to future outcomes, which are not known upfront. This may pose a challenge in projecting financial commitments. In this section we consider two aspects of this uncertainty. First, we will discuss how you may understand the uncertainties by exploring different scenarios of the provider’s likelihood of success in achieving outcomes, and how this understanding can help you to engage with the market. Second, we will discuss ways of setting an upper limit on financial commitments by describing how contract caps may be used.
All stakeholders in an outcome based contract need to understand how likely it is that the project will achieve the proposed outcomes. If outcomes seem harder to achieve, either because the cohort is difficult to help or because the desired level of improvement is high, then the probability of achieving outcomes will be lower, and the contract will be deemed riskier – and risk demands compensation. In the case of SIBs, social investors backing bids to provide working capital to finance the contract will take this into account in the level of financial return they expect – higher risk calls for higher potential returns.
There are a number of ways to make future performance projections and you will want to use a combination of all of these. To have a goal in mind, it is helpful to identify a range of values. For instance, you can estimate a ‘minimum expected scenario’ (sometimes termed ‘base case’), a ‘best case scenario’ and a ‘worst case scenario’.
Using historical data – If you have a well-documented historical data record, this is a good starting point for a first estimate of the likely success of the project and how many outcomes are likely to be achieved.
Using existing evidence / academic research – in some cases, there will be existing evidence or academic research indicating how successful a particular programme or approach will typically be. For example, the Ways to Wellness social prescribing programme used the results of a pilot programme carried out by Nesta to predict how successful the project would be.
Running a procurement process with dialogue – by running a procurement process that allows for dialogue with multiple providers, you can compare competing claims on likelihood of success, and whether the estimates you are being given are based on robust assumptions. While providers will naturally want to show they have the greatest chance of success, it is important to assess how realistic the prediction is – or whether it seems overly optimistic. Please refer to the Procurement and Contracting Guide for more about the different procurement approaches that allow for this sort of dialogue to take place. Generally, the revised 2015 regulations allow a great deal of flexibility in how the market is engaged throughout a procurement process, which it is worth taking advantage of.
Using the expert judgment and data of a provider – in some cases, you may be procuring a completely innovative service, or working with a new cohort who have not been previously identified or worked with. In these cases, you may be using an outcome based contract because you are not able to determine the likelihood of success and need to rely almost entirely on the projections of a provider, whose rationale you will want to test. The risk is higher in these experimental scenarios, though the contract enables some (or all) of the financial risk to be transferred.

Using a learning contract or pilot period – you could procure services anticipating an initial period where you closely monitor the implementation of the intervention and improve your understanding of what level of outcomes it is feasible to expect. This would also allow you to identify the key barriers and strengthen your performance management system and payment metrics. At the end of this initial phase you can firm up your payment mechanism in collaboration with other stakeholders. There are examples in the UK of approaches like this being used. You still need to have a well-reasoned baseline scenario before starting the contract, and should use this flexibility as a genuine opportunity for learning in partnership with other stakeholders. Good stakeholder relationships and a level of trust, as well as a clearly defined process for future price adjustments, are required to avoid the danger of the provider exploiting the flexibility to ‘move the goalposts’ in, say, year 3 of a 7-year contract. Our short guide to contracting has more detail on how contractual terms can be used to safeguard this approach.
An outcome based contract may prove particularly successful and deliver more outcomes than you anticipated. This is clearly positive in terms of social outcomes. However, it introduces uncertainties and risks in terms of financial commitments. These can be limited by defining payment caps, which can be done in a few ways. The approaches described below are not necessarily mutually exclusive, and a combination might be appropriate.
Setting a cap on the total payment to providers or on the number of beneficiaries
You can consider setting a cap linked to the available budget for outcome payment. Consider the following relationship:
Total payment to provider = (Max price per outcome) * (Number of people receiving the service) * (Likelihood of success)
In this approach, you will already have established a price per outcome (as discussed in Chapter 3), so you are setting a payment cap based on a maximum number of beneficiaries for which you will make payments under the ‘best case’ likelihood of success. You need to consider that this makes it unlikely the provider would continue investing effort and resources in providing the interventions to more people, as this would be at their own cost. Some providers (or the investors providing the finance) might continue to deliver the service anyway as they value the achievement of outcomes for their own sake and can access the required extra funding. You could discuss in advance with the provider what they expect to do if the cap is reached.
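The relationship above can be sketched directly in code. All figures here are hypothetical assumptions chosen for illustration, not taken from any real contract:

```python
# Illustrative sketch of the payment-cap relationship described above.
# All figures are hypothetical assumptions for illustration only.

def payment_cap(max_price_per_outcome, max_beneficiaries, likelihood_of_success):
    """Budget needed if the 'best case' share of beneficiaries achieves the outcome."""
    return max_price_per_outcome * max_beneficiaries * likelihood_of_success

# e.g. up to 400 beneficiaries, £4,000 per outcome, best-case 60% success rate:
cap = payment_cap(4_000, 400, 0.60)
print(f"Total payment cap: £{cap:,.0f}")  # £960,000
```

Running the same calculation under the base and worst case likelihoods shows how much headroom the cap leaves if the project outperforms expectations.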
In some cases the cap follows from identifying a specific cohort. For instance, the “Street Impact” London Rough Sleepers SIBs identified the target cohort as consisting of 416 named individuals identified as sleeping rough in a particular dataset.
Setting a cap on the total payable outcome per individual
You may have defined a number of different outcomes that you will pay for as and when an individual in a cohort achieves them. As described earlier, this could be because your cohort is diverse, or because you want to include an earlier payment and/or reward progression towards an end outcome. If every individual in the cohort achieves every one of the available outcomes, you will overshoot your budget. However, rather than limiting the overall payment across a cohort, you may want to consider a cap to limit the total payment on each individual. This type of cap signals to providers the need to balance doing intensive work with a single individual with their ability to engage with a higher number of individuals overall (an ‘equity’ consideration). It also helps to protect from the possibility that providers focus on individuals who are ‘easier’ to work with, and who can progress through multiple outcomes more quickly. This is especially helpful when considering that these individuals were more likely to achieve some of the outcomes even without the intervention.
Can a separate cap be set on the provider’s surplus / investor returns?
Setting a payment cap as described above allows you to limit the return that investors can expect to earn from the project. If there are no investors, the same principle applies, but the ‘return’ would be reflected as provider profit or surplus. Providers and investors can use a number of methods to determine the level of surplus or return they anticipate, such as carrying out a financial sensitivity analysis. This analysis plays out a number of scenarios based on the expected outcome success levels.
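A financial sensitivity analysis of this kind can be sketched very simply. The scenario names, success rates, prices and costs below are all hypothetical assumptions, intended only to show the mechanics:

```python
# Illustrative sketch of a simple financial sensitivity analysis.
# Scenario names, success rates, prices and costs are hypothetical assumptions.

def surplus(outcome_price, cohort_size, success_rate, delivery_cost):
    """Provider surplus (or investor return before fees) under one scenario."""
    return outcome_price * cohort_size * success_rate - delivery_cost

scenarios = {"worst case": 0.35, "base case": 0.50, "best case": 0.65}
outcome_price, cohort_size, delivery_cost = 4_000, 300, 550_000

for name, rate in scenarios.items():
    s = surplus(outcome_price, cohort_size, rate, delivery_cost)
    print(f"{name}: success rate {rate:.0%} -> surplus £{s:,.0f}")
```

Note how the worst case produces a loss: this is the downside that investors weigh against the capped upside, and why they may push for a payment ‘floor’ as described below.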
Investors may be able to take risks on a particular project where they can take a portfolio approach. Across a portfolio of projects that they support, they expect that some projects will be more successful and lead to a higher return, whilst others may be less successful and lead to a loss. If you set a payment cap that limits investor returns, the investor may also wish to discuss a ‘floor’, i.e. a minimum payment that limits the total loss they could incur.
It is worth mentioning that there is not necessarily an inherent need to cap surplus or returns, as in a well-designed payment mechanism, these will be higher when you are getting more outcomes, which is the aim.
At the beginning of this guide, we highlighted how a fundamental purpose of using an outcomes-based contract and SIBs is to generate a shared understanding between a contracting authority and providers of ‘what good looks like’, i.e. the desired outcomes, and to allow more autonomy for the provider to use their skills in bringing about the desired outcomes.
Although defining the aspects of a payment mechanism will by necessity seem like ‘technical’ work, remember that the contract and the payment mechanism should strengthen and not weaken your relationship with the provider. It is about creating the space for flexibility and continuous improvement in tackling complex needs.
There are many useful resources that will help you price outcomes for your social impact bond or outcomes based contract.
UK Treasury green book - The Green Book is guidance issued by HM Treasury on how to appraise policies, programmes and projects. It also provides guidance on the design and use of monitoring and evaluation before, during and after implementation
Unit cost database developed by the New Economy - This unit cost database brings together more than 600 cost estimates in a single place, most of which are national costs derived from government reports and academic studies. The costs cover crime, education & skills, employment & economy, fire, health, housing and social services.
Supporting public service transformation: cost benefit analysis guidance for local partnerships - This outlines a methodology for a cost benefit analysis (CBA) model. It is designed to simplify and to lower the cost of performing CBA in the context of local programmes to improve public services where analytical and research resources are limited.
Core project stakeholders (and their project roles) include: WLZ (service provider), the London Borough of Hammersmith and Fulham (LBHF) (co-commissioner), the Royal Borough of Kensington and Chelsea (RBKC) (co-commissioner), The National Lottery Community Fund (co-commissioner), local schools (co-commissioner), private philanthropy (co-commissioner), and Bridges Fund Management (BFM) (investor).
WLZ is an organisation that partners link workers, charities, schools and other local organisations to support children and families.
1) If a child from a disadvantaged community in West London, United Kingdom, who is at risk of negative outcomes in life (based on a combination of risk factors)…
2) … has a measurable improvement in …
3) … social and educational outcomes …
4) … as measured by activity participation, service interactions, and a defined set of outcome measurement tools…
5) … over two years…
6) … based on engagement milestones and compared to their baseline at the start…
7) … then we will pay an agreed amount based on average project delivery costs per participating child.
The project targets children aged 5-16 in disadvantaged communities in West London who are at risk of negative outcomes in life due to being ‘off-track’ in school and in their wellbeing. The project covers the northern parts of two Local Authorities in West London: the London Borough of Hammersmith and Fulham (LBHF) and the Royal Borough of Kensington and Chelsea (RBKC).
Each authority has a separate SIB contract, though the contracts share the same features.
The project targets improvements in social and educational outcomes across several areas, and the contracting authority pays out based on children showing a measurable improvement. These milestone payments were negotiated among stakeholders.
The project funds WLZ to offer a 2-year tailored programme for each child that addresses a range of needs and builds a variety of strengths and skills, supported by their link workers, who are based in their schools and work with children and families alongside multiple local partner charities for specialist support. WLZ contracts its charity partners and operates a practical shared delivery relationship between the WLZ link workers and the delivery partner session leaders on the ground. Most, but not all, of West London Zone’s work is funded through a social impact bond (often referred to as a ‘collective impact bond’).
Once the cohort is identified, WLZ link workers approach at-risk children and their parents/carers in partnership with the child’s school. The link worker builds a trusted adult relationship with these parties while co-designing the child’s individual support plan, in a co-design phase which uncovers information about the child’s strengths, interests and skills and informs the design of a phased support plan. Developmental support is provided by WLZ link workers, while WLZ delivery partners (32 as of Autumn 2018) provide specialist support to participating children.
The project was inspired by the Harlem Children’s Zone, a charitable enterprise in New York initiated to support children from ‘cradle to college’. The premise of the West London Zone intervention lies in the idea that issues relating to children living in deprived neighbourhoods are complex and cannot be solved using a single agency or intervention.
WLZ values outcomes based on expected costs and success rates for the 2-year programme. In 2015-2016, a pilot implementation study was undertaken to inform these estimates. The WLZ project was the first time this set of stakeholders had worked together in this way.
The pilot was philanthropically funded and run in parallel with the development of the social impact bond financing model. The set up, delivery, and evaluation of the pilot together with the SIB development cost £580,000.
The WLZ pilot implementation study offered insights into what performance could be expected when otherwise individual services were combined in a new delivery model.
Project milestone payments in the WLZ SIB are based on:
Delivery costs per participating child incorporated the assumption that different children would require different support. This was converted into an average figure for payment value estimation.
Iterative discussions between WLZ and BFM took into account: average delivery costs per participating child; the proportion of payment assigned to different milestones; and the likelihood of the cohort achieving each milestone.
The project identified eligible children in disadvantaged communities in LBHF and RBKC using risk factor analysis. Analysis steps are outlined in Table 2. WLZ expects to work with at least 700 children over the course of the SIB-funded project. Cohort identification data provides the baseline for measuring progress of children participating in WLZ services. Step 5 in the risk factor analysis (target cohort agreement) ensures that both schools and councils verify children for inclusion, which reduces the risk of perverse incentives leading to WLZ “cherry picking” children with a higher likelihood of reaching improvement milestones (and associated payments).
The risk factor analysis combines school level administrative and demographic data with interviews with school staff, and is verified using the WLZ My Voice survey. This collects data via multiple self-reported measures. This helps determine children’s emotional wellbeing, trusted adult networks, engagement with school, peer relationships, and parental relationships.
The impact of services provided – improvement in social and education outcomes – is measured via a “rate card” of progress outcomes. This approach does not involve a control group, but rather achievement of defined milestones. Payments are allocated across six “milestones”, which are listed in Table 1.
Outcome funding is divided equally across the six possible milestone payments (Table 1). Half is allocated to service engagement milestones (#1, #2, #3) and half to outcome payments (#4, #5, #6).
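As a minimal sketch, this equal split can be expressed as follows. The £1.2m total is a hypothetical figure for illustration only; the actual contract value is not stated here.

```python
# Illustrative split of WLZ outcome funding across the six milestones.
# The total is hypothetical, not the actual contract value.
total_outcome_funding = 1_200_000  # GBP, hypothetical

per_milestone = total_outcome_funding / 6  # equal allocation per milestone

engagement_pot = per_milestone * 3  # engagement milestones #1, #2, #3
outcomes_pot = per_milestone * 3    # outcome milestones #4, #5, #6
```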
There was a different payment milestone framework at project launch, but this was revised after a year as WLZ and commissioners determined that the original mechanism was too complex, and not all of the data could be collected in the way required to conduct the measurement.
The WLZ SIB uses historical baselines for the outcome payments #4, #5, and #6. Historical baselines use the data collected during the risk analysis / identification process described above.
Likelihood of success is based on estimates of how many children might be expected to improve by participating in WLZ’s service, drawing on the pilot implementation study. WLZ and BFM agreed a ‘likelihood of success’ for each outcome, which incorporated expectations that most service recipients would engage with services (milestones #1, #2, #3) but fewer would achieve end outcomes (milestones #4, #5, #6).
WLZ SIB stakeholders used a sensitivity analysis to account for different project scenarios. Table 3 outlines the results of this analysis for base, high, and low likelihood-of-success rates for children participating in the WLZ SIB. The figures shown are illustrative rather than the actual ones used.
In Table 3, ‘sign-up’ is 100% for each scenario because commissioners agreed to make outcome payments for each eligible child signed up. The total number of children worked with is generally limited by school budgetary constraints rather than by risk analysis or programme interest. WLZ had a target number of children to sign up and, given sufficiently high demand, expected to reach this target in every scenario.
This sensitivity analysis also informed the financial viability of the WLZ SIB. Different scenarios were explored to understand:
Financial modelling estimated:
1) Commissioners’ outcome payments – Calculated using (i) cohort size (i.e. number of children worked with), (ii) cohort retention (i.e. number of children still engaged after one and two years) and (iii) percentage of children in the cohort predicted to achieve each milestone. The total payment for each milestone is derived by multiplying these figures together. The overall total is derived by summing this total for each milestone.
2) Service costs – Calculated using cost of delivery associated with differing strengths and needs within the cohort
3) Contract Loss/Surplus – Calculated using predictions of: (1) commissioner outcome payments and (2) service costs. Calculations subtract the service costs from commissioners’ outcome payments. Net figures (i.e. losses or surpluses) varied according to variations in cost, cohort size, and likelihood of success.
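The three modelling steps above can be sketched as follows. All cohort sizes, retention rates, likelihoods and prices below are hypothetical illustrations, not the actual WLZ contract figures.

```python
# Hedged sketch of the WLZ financial model: (1) outcome payments,
# (2) service costs, (3) contract loss/surplus. All figures hypothetical.

cohort_size = 700  # minimum number of children WLZ expects to work with

# milestone: (retention rate, likelihood of success, price per child in GBP)
milestones = {
    "engagement_yr1": (1.00, 0.95, 400),
    "engagement_yr2": (0.85, 0.90, 400),
    "outcome_yr2":    (0.85, 0.60, 800),
}

def milestone_payment(cohort, retention, success, price):
    # Payment = cohort size x retention x likelihood of success x unit price
    return cohort * retention * success * price

# (1) Commissioners' outcome payments: sum the payment for each milestone
total_payments = sum(milestone_payment(cohort_size, r, s, p)
                     for r, s, p in milestones.values())

# (2) Service costs: hypothetical average delivery cost per child
service_costs = 650 * cohort_size

# (3) Contract loss/surplus
surplus = total_payments - service_costs
```

Varying the retention, likelihood-of-success and cost figures reproduces the kind of base/high/low scenario analysis described for Table 3.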
Total payments from all commissioners together (local authorities, schools, philanthropists, and the National Lottery Community Fund) were estimated at £3.5m-£4m across both LBHF and RBKC. West London Zone received a £550,000 loan from Bridges Fund Management as working capital to finance the up-front work required under the contract. Repayment of this loan was linked to WLZ’s success in achieving outcomes, such that WLZ was partly protected if outcome payments were lower than targeted.
West London Zone received £150,000 from the “Stepping Stones Fund”, a collaboration between UBS Optimus Foundation and the City Bridge Trust. This was to be used as partial first-loss payment for the investors, if the intervention was unsuccessful against its targets. As explained in the Commissioning Better Outcomes Fund in-depth review:
This safety net within the model meant that the investors could essentially commit to a model with a number of innovative, and untested, elements. Importantly, though, if the SIB model was successful in the first year, then WLZ could use the grant from City Bridge Trust and UBS as additional money in their service. This therefore meant WLZ was still motivated to ensure the intervention was a success and only use the money as first-loss payment if necessary.
Primary and secondary sources have been used for this case study. The secondary sources are highlighted in the text, and the primary sources are listed below.
The Ways to Wellness (WtW) social impact bond (SIB) launched in 2015 and is ongoing. It was the first SIB funded in the United Kingdom (UK) targeting health outcomes. The project will run for 7 years, ending in 2022.
Core project stakeholders (and their project roles) include: Newcastle Gateshead Clinical Commissioning Group (CCG) (commissioner), Commissioning Better Outcomes Fund (commissioner), Cabinet Office Social Outcomes Fund (commissioner), Ways to Wellness (provider), Bridges Fund Management (investor) and Social Finance (who provided advice during development). The SIB was initially developed by Newcastle West CCG, which merged with Gateshead CCG and Newcastle North and East CCG into Newcastle Gateshead CCG.
Ways to Wellness (WtW), a separate legal entity, was created to coordinate implementation of the SIB.
1) If a resident with a long term health condition living in West Newcastle upon Tyne…
2) … has improvement in their self-management of…
3) … their long term health condition…
4) … as measured by well-being improvement and reduction in secondary care costs…
5) … every 6 months following admission (during 7 years of project)…
6) … compared to self-reported baselines on well-being, and secondary health care costs compared to a matched cohort of patients in Newcastle North and East…
7) … then we will pay an agreed amount per participant, based on secondary care cost reduction.
The project targets 8,500 patients aged 40 to 75 living with long term health conditions in areas of high socio-economic deprivation in West Newcastle Upon Tyne, UK.
Long term health conditions (such as diabetes or some types of mental illness) disproportionately affect those facing socioeconomic difficulties. West Newcastle upon Tyne is among the most deprived areas in England, with higher-than-average receipt of sickness and disability-related benefits and 18% of residents recorded as living with a long term condition (LTC).
The project targets improvements in sense of wellbeing and reductions in use of secondary healthcare services through self-management of long-term conditions. WtW earns payments from commissioners through improved health outcomes and associated reductions in care costs for Newcastle Gateshead CCG.
The project funds a consortium of service providers to provide social prescribing for eligible patients via link workers. Social prescribing enables GPs to refer people to a range of local, non-clinical services. At the outset, project service providers included First Contact Clinical, Mental Health Concern, HealthWORKS Newcastle and Changing Lives. HealthWORKS withdrew from service provision in August 2017 and Changing Lives in March 2018; GP practices and patients were redistributed to the remaining two providers.
WtW link workers offer support to patients by helping them to identify meaningful health and wellness goals, and providing support to help them access community and voluntary groups and resources. Social prescribing recognises that people’s health is influenced by social factors as well as clinical ones. Social prescribing aims to provide people with a variety of social activities, typically through voluntary and community sector organisations, such as volunteering, arts activities, group learning, gardening, befriending, cookery, healthy eating advice and a range of sports.
Stakeholders negotiated outcomes prices based on estimates of fiscal cost savings. Cost savings were calculated from primary and secondary health care costs, social care costs and health related benefits.
The WtW SIB was developed when there was a paucity of costing and implementation information on social prescribing. The original business case was informed by the “People Powered Health” programme, an earlier study under Diabetes Year of Care and consultations with prospective service providers.
The “People Powered Health” programme included social prescribing as an alternative approach to service provision, as well as a cohort from West Newcastle. North East Quality Observatory System (NEQOS) and Social Finance undertook further desk research to validate these findings.
Evidence informing the WtW SIB included:
WtW based outcome payments on (potential) net cost savings for patients with long term conditions, if the provision of social prescribing improved patient outcomes. Additionally, WtW conducted consultations with prospective services providers and presented expected costs of delivering a social prescribing service in West Newcastle upon Tyne. This informed refinements to cost estimates in the SIB’s business case. (This market engagement activity was conducted prior to a formal procurement process taking place – while some of the same service providers were involved, the two processes were independent and both conducted openly and fairly).
This work was used to create a business case for the SIB and social prescribing interventions predicated on cost savings.
This estimation approach does not account for the wider economic benefit, or ‘intrinsic value’ of benefits to participating people with long term conditions.
The project identified eligible patients in West Newcastle Upon Tyne, UK using eligibility criteria including:
WtW aims to offer services to over 80% of the patients who meet referral criteria. At the outset, the project was expected to reach 8,500 patients in the area; the target population over 7 years (allowing for population growth and new entrants) was circa 15,000. The original population estimate was higher, but was refined downwards when it was discovered that patients with more than one condition were being double-counted.
The project measures improvement via two outcome measures:
1) Improved sense of wellbeing, as measured through “Wellbeing Star”.
2) Difference in expenditure between WtW and comparison cohort.
Triangle Consulting’s “Wellbeing Star” measures improvements in self-reported wellbeing across several categories. Patients and link workers jointly assess scores every six months, and a pre-post evaluation measures the average change for the whole cohort between the initial and most recent Wellbeing Star measurements.
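As a sketch, the cohort-level pre-post calculation might look like the following. The scores are invented for the example; the real Star covers several domains scored jointly by patient and link worker every six months.

```python
# Illustrative pre-post change on a Wellbeing Star-style measure.
# All scores below are invented for the example.

# {patient_id: [initial score, ..., most recent score]}
star_scores = {
    "p1": [3.0, 4.0, 5.0],
    "p2": [2.5, 2.5, 3.5],
    "p3": [4.0, 3.5],
}

# Change per patient: most recent minus initial measurement
changes = [scores[-1] - scores[0] for scores in star_scores.values()]

# Cohort-level outcome: average change across all participants
average_change = sum(changes) / len(changes)
```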
For Outcome 1 (improved sense of wellbeing), The National Lottery Community Fund and the Cabinet Office agreed to pay more towards this lead indicator after considering: whether Ways to Wellness would deliver wider patient benefits over and above the measurable secondary care savings; the level of risk to the social investor and whether this was acceptable; and the level of buy-in from the commissioner and proposed providers (whether they were on board with the contract, and how confident they were about delivering the outcomes in it).
Reduced expenditure on hospital services was calculated based on average costs within the WtW cohort compared to a matched cohort of patients in Newcastle North and East.
Payments for improved sense of wellbeing began in 2015. For healthcare expenditure reductions, payments are made per patient per year. Payments for reduced healthcare expenditures began in August 2017.
The wellbeing star (4.2.1) and reduced expenditure (4.2.2) outcomes were chosen to keep the project viable over the short term. Savings from improved health and wellbeing for social prescribing recipients were only expected to accrue 1-5 years after patient admission. To make the contract viable for providers, improved sense of wellbeing was incorporated as an outcome to create earlier “lead” payments. This reduced the level of risk to the social investor and therefore the cost of working capital.
WtW receives outcome payments from commissioners, and then sub-contracts with service providers via activity- and output-based payments. For the first two years, payments to service providers combined variable activity payments with a fixed retainer; at contract renewal after two years, the retainer was redistributed toward variable activity payments. Variable payments included:
As the special purpose vehicle (SPV) in the SIB, WtW shielded service providers from the downside risk (financial losses) of not achieving the overall payment outcomes. The upside (financial benefit) was shared between the social investors, WtW and the providers; the proportions of this sharing are not disclosed.
The WtW project uses both historical baselines and matched controls to evaluate the impact of social prescribing provision. The historical baseline involves joint completion of the Wellbeing Star, which reduces the risk of self-report bias that would weaken attribution. The matched control involves comparing average health expenditures within the WtW cohort to those of a matched cohort of patients in Newcastle North and East; this may make it possible to isolate a causal effect of WtW on differences in average health expenditures.
Independently of the payment mechanism, the National Institute for Health Research began research in July 2018 that includes economic analysis. The research is expected to conclude in October 2020.
Based on publicly disclosed information, it is not clear how likelihood of success was incorporated into the SIB payment mechanism.
Analysis was done to develop three basic scenarios:
1. a ‘base case’ of outcomes achievement where investors get their capital back but earn no positive return
2. a case where fewer outcomes are achieved than in the comparison group and the investors lose capital.
3. a case where more outcomes are achieved than in the comparison group, and investors get their capital back, and earn a positive return or ‘upside’.
Based on publicly disclosed information, it is not possible to verify the exact link between the different scenarios for outcomes achievement and the likely levels of investor losses or returns, beyond the basic principles stated above.
The project’s investor, Bridges Fund Management, committed to provide WtW with a £1.65m investment, with £1.1m drawn down by WtW. The facility availability period has since expired and it is not expected that any further SIB investment will be needed / received.
The investment from Bridges Fund Management is 100% at-risk. Repayments and returns are linked to achievement of project outcomes. The National Lottery Community Fund / Ecorys ‘Deep Dive’ review published in 2015 makes the following statement in regards to returns:
If, and only if, base case success targets are achieved the estimated money multiple over 7 years will be c.1.38 times the initial investment. If outcomes achieved are lower than base case the multiple could be much lower and conceivably all investment could be lost.
The figure given in this statement does not account for the costs to Bridges Fund Management of their involvement in project development and their ongoing role in project management. These costs mean the eventual financial return to the capital providers will be lower than stated here.
Primary and secondary sources have been used for this case study. The secondary sources are highlighted in the text, and the primary sources are listed below.
Core project stakeholders (and their project roles) included: Essex County Council (commissioner), Action for Children (provider), Social Finance (intermediary), and a consortium of investors. The consortium included: Big Society Capital, Bridges Fund Management, Social Ventures Fund, Charities Aid Foundation, the King Baudouin Foundation, Tudor Trust, Barrow Cadbury Trust, and the Esmée Fairbairn Foundation.
1) If a young person in Essex, UK who is at risk of entering state care (based on a combination of risk characteristics)…
2) … is successfully diverted from…
3) … entering into care…
4) … as measured by number of days out of care…
5) … over 30 months…
6) … compared to historical benchmark of children entering care…
7) … then we will pay £120 per participant per day of care averted, based on the estimated cost of having been in care.
The project targeted young people aged 11-16 in Essex, England who were at risk of entering care due to behavioural problems or family breakdown.
The project targeted improvements in social outcomes based on the reduction in days the cohort spends in care, and generated outcome-based payments by Essex County Council paying out a share of their cost savings resulting from reduced care placements for at risk young people. Care placements can cost over £200,000 per annum and research suggests it is substantially harder to address behaviours after entering care.
The Essex Edge of Care SIB operated via a special purpose vehicle, Children’s Support Services Ltd (“CSSL”). CSSL was responsible for managing the outcomes contract, the performance of the service, and paying the service provider. The project intermediary, Social Finance, noted that the aim of the intervention was to:
improve parenting skills of parents and carers which in turn impacts the behaviour of the adolescents so that they do not become looked after or the amount of time they spend in care is reduced.
CSSL funded Action for Children, a national children’s charity, to deliver multi-systemic therapy (MST), an intensive evidence-based family therapy. MST targets specific problems and breaks negative cycles of behaviour via the promotion of positive social behaviours. An Action for Children service delivery manager oversaw project implementation and the project was delivered by two teams of four therapists, each overseen by a supervisor with assistance from a business support officer.
Essex County Council valued outcomes based on its projected cost savings from young people diverted from care by the service provided. The outcome payment did not account for the wider benefit or ‘intrinsic value’ of improved outcomes, or for improved efficiencies. This aligned with the “Prevention” approach outlined in the GO Lab “Pricing Outcomes” guide.
The impact of services provided – reduction in care placement days – was measured as the difference in aggregate days spent in care between a historical comparator group who did not receive MST and those receiving MST through the SIB. The reasons for using the reduction in aggregate care placement days as the measure of success were that:
Secondary outcomes that were measured, but not linked to SIB payments, included: educational engagement, offending, and personal wellbeing.
Projected cost savings in the Essex Edge of Care SIB were the difference between the comparator and intervention groups. This amounted to £120 per care day avoided.
Value calculations involved two UK government datasets about children in care: SSDA903 returns and Section 251 budget data. SSDA903 returns collate annual local authority information about “looked after children” (LAC) – children who are in care – categorised by care placement type, including residential, foster, unknown, and “other”. Section 251 budget data collates local authority statements on planned and actual expenditure for education and children’s social care.
Projected costs for the intervention and comparator groups were the number of days spent in care (published in SSDA903 returns) multiplied by the unit costs of care, which were estimated in three steps:
1) SSDA903 returns were used to calculate the average total days per year that looked after children spent in each type of care.

2) Average annual days spent in care were mapped against annual costs for each type of care, obtained from Section 251 budget data.

3) Annual costs for each type of care were divided by annual days spent in that type of care. The resulting cost per day of each type of care was the unit cost for outcome payments in the Essex “Edge of Care” SIB.
Once estimated, the unit costs of care placement were sense checked using outputs from the Local Authority Interactive Tool (LAIT). The LAIT provides a single central evidence base for data related to children and young people sourced from various UK government departments.
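A minimal sketch of the three-step unit-cost estimate for a single care type, with hypothetical figures standing in for the SSDA903 and Section 251 data:

```python
# Hypothetical unit-cost estimate for one type of care placement.
# The day and cost totals below are invented for illustration.

annual_days_in_care = 36_500   # step 1: total LAC days/year in this care type
annual_cost = 7_300_000        # step 2: annual expenditure on this care type (GBP)

# Step 3: divide annual cost by annual days to get cost per day
unit_cost_per_day = annual_cost / annual_days_in_care

# In the Essex SIB the agreed payment was £120 per care day avoided
days_averted = 1_000  # hypothetical number of averted care days
outcome_payment = 120 * days_averted
```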
The reasons for using this payment structure, on top of measures of success, were:
This estimation approach did not account for the intrinsic value of children being out of care, or benefits associated with not entering care (e.g. positive socioeconomic spillovers). It is also not clear whether the payment mechanism accounted for additional administrative costs. A 2016 review of the initial three years of implementation outlines some of these administrative activities, incurred by using a SIB compared with normal delivery of an MST programme. These included governance, performance management, and payment-by-results processes. The SIB required additional effort on data management and payments, by all parties, due to the complex and intensive nature of the focus on results.
The project identified service recipients via a single source referral process – referrals were taken from Essex County Council Children’s Social Care ‘quadrant resource panel’ and assumed the young person was already a child in need or subject to a child protection plan.
Young people were tracked for 30 months after the start of the MST course (which lasts 4-5 months), and their outcomes were measured quarterly.
Outcomes were measured quarterly and paid by Essex County Council to CSSL (the special purpose vehicle). This regular payment allowed investors to recycle their capital and reinvest in ongoing MST provision.
Payments were entirely based on a single outcome and reflect the reduction in costs of care to Essex County Council.
The SIB used a historical baseline to evaluate the effect of MST provision, within the SIB, on reduced care admissions. The historical baseline for each year used information from the previous three years of cases not receiving MST, covering: project eligibility, numbers entering care, numbers not entering care, and individuals’ average length of stay in care. Estimates of the counterfactual outcomes (i.e. what would have happened without MST) include:
There was no concurrent comparable control group and it is not possible to isolate the causal effect of the intervention. It is difficult to compare the performance of the Essex MST service to others due to different project scales and contexts.
Likelihood of success was based on the expected number of children that could benefit from MST each year. This data was collected through a six-month feasibility study and historical government datasets; Social Finance led the data cleaning and analysis.
Based on an overall referral target over five years of 380 families, “medium level performance” was set at 110 young people being diverted from care.
Edge of care volume calculations assumed that 70% of referred children each year would be eligible for the intervention; the remainder were ineligible for reasons such as autism or a lack of parental engagement. Of the eligible group, there was an assumed 65% likelihood of entering care in the following 12 months, based on experience and case file analysis.
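The eligibility funnel can be sketched as follows. The referral target and the 70%/65% rates are taken from the case study; the arithmetic itself is purely illustrative.

```python
# Sizing the at-risk cohort from the Essex referral funnel.

referral_target = 380          # families over five years (case study figure)
eligibility_rate = 0.70        # assumed share of cases eligible for MST
care_entry_likelihood = 0.65   # assumed chance of entering care within 12 months

# Expected number of eligible children who would otherwise enter care
expected_at_risk = referral_target * eligibility_rate * care_entry_likelihood
```

On these figures, roughly 173 of the referred children would be expected to enter care without intervention, which puts the 110-child “medium level performance” target in context.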
The Essex MST SIB tied outcome payments to an average of averted unit costs, and did not use future performance projections to account for the likelihood of different outcome scenarios.
Outcomes payments were capped at £7.2 million. Outcome payments from the fund varied based on cost savings achieved.
Primary and secondary sources have been used for this case study. The secondary sources are highlighted in the text, and the primary sources are listed below.
The Educate Girls development impact bond (DIB) launched in 2015. It was the second DIB worldwide and focused on improving education outcomes for children in Rajasthan, India. The project ended in 2018. Project outcomes were met and core project stakeholders deemed it a successful pilot.
Core project stakeholders (and their project roles) included: Children’s Investment Fund Foundation (CIFF) (commissioner), Educate Girls (provider), UBS Optimus Foundation (UBS OF) (investor), Instiglio (intermediary), and IDinsight (evaluator).
1) If girls and boys in Rajasthan, India at risk of not achieving potential levels of enrolment and education outcomes…
2) … achieve an improvement in…
3) … school enrolment and education outcomes …
4) … as measured by enrolment rates and standardised testing…
5) … over 3 years…
6) … compared to historical baselines of enrolment rates and the education outcomes of children in a matched control cohort of villages...
7) … then outcome funders would pay $367,000 for re-enrolment of 79% of eligible students and improved aggregate scores on ASER tests of 5,592 points across the whole cohort, with outcome payments capped at $422,000.
The project targeted out-of-school girls aged 7-14 and girls and boys in grades 3-5 in the state of Rajasthan in India.
The project focused on children living in the Bijoliya, Mandalgarh and Jahajpur blocks in the Bhilwara district of Rajasthan. The DIB funded interventions for children attending 166 government schools across 141 villages. These schools were randomly selected from a sample of 332 schools in 282 villages.
The project targeted improved education outcomes via (re)enrolment of out-of-school girls and improved education quality, as measured by test scores for girls and boys. Commissioners made outcome-based payments against cohort-level results on the evaluated outcome metrics.
The project funded interventions by Educate Girls, an NGO operating in Rajasthan, India. Educate Girls uses an integrated community-based approach to education that involves increasing access to education for primary school-aged children in rural areas, especially girls. In rural parts of Rajasthan, girls are out of school at twice the rate of boys and only 50% of women can read or write.
Educate Girls has a history of success in improving education enrolment and quality via community and cultural engagement, and in 2015 the DIB contract launched to increase the scale of its work. Educate Girls’ strong community ties allow it to positively communicate the value of education within rural communities. The organisation accounts for cultural context and adapts its education provision accordingly, creating tailored teaching programmes that respond to user needs and improve the quality of its education.
A DIB-funded approach allowed the flexibility and support required to address difficult education challenges in Rajasthan. These challenges included one in ten girls aged 11-14 not being enrolled in school and less than a quarter of rural children in Grade 3 being able to read at a Grade 2 level or solve a subtraction problem.
The Educate Girls DIB involved negotiated outcome prices based on estimates of expected project costs and performance. Expectations were based on data from a randomised controlled trial (RCT) and additional baseline collection. In an RCT, one group of individuals receives an intervention/service and their outcomes are compared with those of a control group who share the same characteristics but do not receive the additional intervention/service. This aligned with the “Efficiency” approach outlined in the “Pricing Outcomes” guide.
Price per unit outcome did not inform the overall outcome payments – rather, outcome payment was determined by improvement across the whole cohort. Individual price per outcome can be calculated from overall outcome payments and allocations.
Projected performance was based on data from an RCT undertaken by Educate Girls in the Jalore district of Rajasthan, designed pro bono by University of Michigan faculty. This information outlined the effects of Educate Girls’ activities on enrolment, retention, and learning outcomes. Instiglio, the intermediary, combined performance data from Educate Girls with target area characteristics to estimate enrolment and learning outcomes. The project gathered baseline data from a census-like door-to-door survey.
The value of project payments was negotiated between CIFF and UBS OF based on expected service costs and on UBS OF receiving an internal rate of return (IRR) of 10%. An IRR reflects the rate of return that compensates investors for the risk they absorb with their investment. Unit values for the outcomes can be calculated, but only by working backwards from overall outcome payments and costs.
UBS OF provided US$270,000 as investment, equivalent to the expected cost of service provision (17,332,967 Indian rupees). An IRR of 10% on this investment corresponded to an expected outcome payment of US$367,000 (87% of CIFF’s total available outcome funding) if the targets were met (79% enrolment and +5,592 points on ASER scores). If performance exceeded these targets, CIFF could draw on the total outcome funding of US$422,000 available, which would have enabled UBS OF to earn an IRR of 15%.
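As a rough check on these figures, the implied IRR can be approximated by assuming a single up-front investment repaid in one payment after three years. The actual DIB drawdown and payment timing differed, so this is an approximation, not the contractual calculation.

```python
# Back-of-envelope IRR check for the Educate Girls DIB figures above.
# Assumes one up-front investment and one repayment after three years.

investment = 270_000  # UBS OF investment (USD)
years = 3

def implied_irr(payment, investment, years):
    # Annual rate r such that investment * (1 + r) ** years == payment
    return (payment / investment) ** (1 / years) - 1

base_irr = implied_irr(367_000, investment, years)  # target outcome payment
cap_irr = implied_irr(422_000, investment, years)   # capped maximum payment
```

On this simplification the base payment implies roughly an 11% annual return and the cap roughly 16%, in the same region as the 10% and 15% IRRs quoted; the gap reflects the simplified cashflow timing.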
According to Alison Bukhari of Educate Girls, the estimated service delivery cost took into account some of Educate Girls’ standard administrative costs to operate the programme, but did not include any administrative or transaction costs for designing or managing the DIB. A 2016 Devex piece references that if the DIB had been bigger, the overhead cost would have been a lower proportion of the project’s total costs. However, further information on what was incorporated in overhead costs is not available.
The Educate Girls DIB was set up by its stakeholders as a pilot development impact bond. Its payment approach has limitations: the payment mechanism does not set a clear financial value for individual education outcomes, and is instead based on the cost of delivering the intervention. While a value per outcome can be extrapolated, it does not necessarily reflect a preference for, or valuation of, those outcomes.
The project focused on children living in the Bijoliya, Mandalgarh and Jahajpur blocks in the Bhilwara district of Rajasthan. The project targeted two different, but overlapping, outcome populations. One outcome (enrolment) focused on out-of-school girls aged 7-14 and the other (learning) focused on girls and boys in grades 3-5.
The DIB funded interventions for children attending 166 government schools across 141 villages, matched to a control cohort of equal size. These schools were randomly selected from a sample of 332 schools in 282 villages, itself drawn at random from an eligible population of 396 schools.
IDinsight, the independent evaluator, divided the 332 sampled schools into treatment and control groups. IDinsight used pairwise matching to balance the characteristics of treatment and control groups. Pairwise matching involves assigning villages to “pairs” based on their characteristics and randomly assigning one to the treatment group and one to the control group. Pairwise matching characteristics included:
Outcome payments were allocated to “buckets” for education enrolment and quality. Pre-allocated payments covered only the minimum target outcomes, i.e. US$367,000 of the US$422,000 outcomes budget. 20% was allocated to enrolment rates and 80% to education quality. CIFF pushed for a focus on improved learning due to an identified disparity between the education outcomes of boys and girls in Rajasthan. The project measured success by improvement in enrolment and education quality based on:
Enrolment was measured by identifying the percentage of out-of-school girls in target villages based on door-to-door surveys. This involved a pre-post evaluation of enrolment rates. IDinsight, the independent evaluator, verified enrolments by visiting schools and cross-checking school registers against interview data from principals, teachers, and parents.
Education quality was measured using standardised testing of literacy and maths. The project used the Annual Status of Education Report (ASER) test, which measures proficiency in Hindi, English and mathematics. The ASER test is a widely used and accepted method of rigorously assessing the education outcomes of social sector programmes.
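The 20/80 bucket split can be expressed as a simple allocation. The assumption here is that the split applies to the pre-allocated US$367,000 rather than to the full outcomes budget; the source text does not state this explicitly.

```python
# Illustrative split of the pre-allocated outcome funding (an assumption:
# the 20%/80% weights are applied to the US$367,000 minimum-target amount).
pre_allocated = 367_000
split = {"enrolment": 0.20, "learning": 0.80}

buckets = {outcome: share * pre_allocated for outcome, share in split.items()}
print(buckets)  # enrolment ~73,400; learning ~293,600
```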
Measurement involved annually assessing a panel of students using ASER tests over the three-year evaluation. The impact was calculated by aggregating the differences between cohorts’ baseline and final learning levels for the intervention group and comparing this to the aggregate change in test scores in the matched control cohort. Using aggregate, rather than average, scores linked the quality education payments to improved enrolment in effective schooling.
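The difference between aggregating and averaging learning gains can be sketched with toy numbers. The per-student gains below are invented purely to show the calculation, not actual project results.

```python
# Hypothetical per-student learning gains (final ASER level - baseline level).
# Summing gains (aggregate) rather than averaging them means that enrolling
# *more* students in effective schooling raises the payable metric, not just
# raising per-student performance.
treatment_gains = [2, 1, 3, 2, 2, 1]   # six students in treatment schools
control_gains   = [1, 1, 2, 1]         # four students in matched control schools

aggregate_impact = sum(treatment_gains) - sum(control_gains)  # 11 - 5 = 6
average_impact = (sum(treatment_gains) / len(treatment_gains)
                  - sum(control_gains) / len(control_gains))  # ~0.58

print(aggregate_impact, round(average_impact, 2))
```

Here the treatment group is larger because extra students were enrolled; the aggregate measure rewards that enrolment, while the average measure would not.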
Figure 10.1 displays outcome progress over the course of the project (2015-2018).
The payment was made in one lump sum in July 2018, following the verification of successful project results.
Outcome funding was divided between a lump sum payment to the investor and an additional payment to Educate Girls to incentivise reaching programme milestones. UBS OF negotiated with Educate Girls to pay this incentive at a rate of 32% of the interest payments the Foundation received if outcomes were met, up to an IRR of 15%.
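As an illustration of the 32% incentive arrangement, assuming “interest payments” means the investor’s return above the US$270,000 principal (an interpretation for illustration, not a definition from the project documents):

```python
principal = 270_000  # UBS OF investment (US$)

def incentive_payment(outcome_payment, rate=0.32):
    """Educate Girls' incentive: a share of the investor's interest, if any.

    Illustrative only -- assumes 'interest' means the outcome payment in
    excess of the principal.
    """
    interest = max(outcome_payment - principal, 0)
    return rate * interest

# At the base target payment of US$367,000:
print(round(incentive_payment(367_000)))  # 31040
# At the capped payment of US$422,000 (15% IRR):
print(round(incentive_payment(422_000)))  # 48640
```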
The rationale for the payment structure was to pass ‘upside’ financial benefit to the provider and incentivise performance, without passing on downside risk (financial loss). This upside benefit was in addition to the non-financial risks, such as reputational risk, to which Educate Girls was exposed in the DIB. Payments were set up as a single payout to the investor at the end of the project.
The Educate Girls DIB used matched controls and a historical baseline to evaluate the effect of the service provision on education outcomes.
Assessments of learning outcomes compared the intervention and matched control groups of schools. Evaluations were made at baseline in September 2015 and in February of each subsequent year (2016, 2017, 2018). In total, IDinsight conducted over 25,000 assessments across more than 11,000 students. For the intervention group, which was also compared to a historical baseline, IDinsight enumerators followed up with students who were absent from school, keeping the attrition rate over the three-year evaluation below 4%. Each August, IDinsight validated additions to the out-of-school census.
Due to the cost of the census, IDinsight did not estimate enrolment in control villages. The Instiglio DIB design memo notes several potential factors influencing the enrolment of out-of-school girls. Without a comparator group, a causal effect of Educate Girls’ programme on enrolment cannot be measured.
IDinsight incorporated several measures to mitigate bias and ensure results were robust. IDinsight operated independently from the Educate Girls programme and its field staff, and project enumerators were sent to a variety of schools without being informed of each village’s assignment to treatment or control. Data were collected digitally and reviewed daily for collection errors. The risk of skew in the ASER tests due to gaming or cheating was assumed to be low, given the difficulty of “faking” ability in the language and mathematical reasoning tests. The size of the study (roughly 12,000 participants) also supported the statistical robustness of estimates.
It is not clear how likelihood of success was incorporated into planning of the DIB payment mechanism.
Outcome calculations focused on the aggregate difference between treatment and control groups. IDinsight calculated the minimum detectable effect size based on a sample of 332 schools in 282 villages. This showed that if the true treatment effect on ASER scores was 0.47 points, the evaluation would have a 20% chance of failing to detect it, giving a false negative. The observed difference in learning gains was 1.08 ASER learning levels, a difference that was statistically significant at the 1% level. This implies that the probability of observing such a difference, if there were actually no treatment effect, is less than 1%.
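The two reported statistics fit together via the standard minimum-detectable-effect formula. Assuming a two-sided 5% significance level and 80% power (both assumptions here; the evaluation protocol is not quoted), the MDE of 0.47 implies a standard error of about 0.17, which makes the observed 1.08-level difference highly significant:

```python
import math

# Assumed design parameters (not stated in the source):
z_alpha = 1.96   # critical value for a two-sided 5% significance test
z_beta = 0.84    # critical value for 80% power (20% false-negative risk)

mde = 0.47        # reported minimum detectable effect, ASER levels
observed = 1.08   # reported difference in learning gains, ASER levels

# MDE = (z_alpha + z_beta) * SE, so back out the implied standard error
se = mde / (z_alpha + z_beta)      # ~0.168
z = observed / se                  # ~6.4 standard errors
p = math.erfc(z / math.sqrt(2))    # two-sided p-value, far below 0.01

print(round(se, 3), round(z, 1), p)
```

Under these assumptions the observed effect is more than six standard errors from zero, comfortably consistent with the reported significance at the 1% level.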
Statistical uncertainty was not incorporated into outcome payment calculations. Potential payment scenarios are outlined in Instiglio’s design memo, but these were updated following implementation of the Educate Girls DIB.
The maximum total outcome payment was US$422,000. The minimum outcome payment of US$0 was tied to the project having no impact at all.
The GO Lab case study (https://golab.bsg.ox.ac.uk/knowledge/case-studies/educate-girls/) offers further analysis, conclusions, and recommendations about the Educate Girls DIB.
The use of an outcomes-based payment mechanism created several positive spillovers. The mechanism created incentives for Educate Girls to engage more rigorously with the data collected and analysed by IDinsight. Subgroup analyses, made possible by the breadth of information collected about study participants, allowed Educate Girls to understand how their services were affecting individuals differently within the intervention group. This supported Educate Girls’ substantial adaptation and expansion of activities in project year 3.
Learnings from the Educate Girls DIB are feeding directly into the Quality Education India DIB, an ambitious US$11 million project coordinating service provision across three NGOs.
1. Government Outcomes Lab. Case studies: Educate Girls [Internet]. 2018. Available from: https://golab.bsg.ox.ac.uk/knowledge/case-studies/educate-girls/
2. Instiglio. Design Memo: Educate Girls Development Impact Bond [Internet]. 2015. Available from: http://instiglio.org/educategirlsdib/wp-content/uploads/2016/03/EG-DIB-Design-1.pdf