Evaluation

All our interventions are evaluated


To create new knowledge about possible solutions to the welfare state’s central challenges, we need to know whether the new interventions we develop on the basis of our studies make a positive and lasting difference in a cost-effective way.


If, based on our studies, we decide to test an intervention in practice, we ensure that the end result is evaluated for effectiveness according to the highest scientific standards, often through a randomised controlled trial or a mixed-methods design. All evaluations are carried out in close collaboration with external, independent researchers. Analysis plans and subsequent results are always published.


Why do we use randomised controlled trials?

A randomised controlled trial is an impact assessment method in which the people to be offered a new initiative are chosen at random, by drawing lots. Those not selected to receive the offer then function as a control group: they continue to receive the standard offer that everyone was receiving to begin with.
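The lottery described above can be sketched in a few lines of Python. This is an illustration only; the participant names and the 50/50 split are hypothetical, not a description of how any actual trial was run.

```python
import random

def assign_by_lottery(participants, seed=42):
    """Split participants at random into a treatment group (offered the
    new initiative) and a control group (kept on the standard offer).
    Illustrative sketch only."""
    rng = random.Random(seed)
    shuffled = list(participants)   # copy, so the input list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participants, purely for demonstration.
treatment, control = assign_by_lottery([f"person_{i}" for i in range(100)])
print(len(treatment), len(control))  # 50 50
```

The essential point is that membership of each group is decided by the random draw alone, never by the participants or the programme staff.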


Let us look at an example: We have developed the NExTWORK programme – a company-centric initiative for young people – and want to know whether the initiative is more successful at helping the young people into jobs or education than the existing municipal programmes they would otherwise have been offered.


If we allowed the young people themselves to choose whether to follow NExTWORK or the usual programme at the job centre, there is a risk that only the most motivated young people would sign up for NExTWORK. When we then compared the two groups, more young people from the NExTWORK group might well have found a job or started an education than those who followed the standard programme. However, this would not necessarily be on account of the NExTWORK initiative; these young people may simply have been more motivated to begin an education or a job in the first place. This is why lots must be drawn to decide who is offered which programme.

Randomisation and numerous participants

Randomisation ensures that the compositions of the two groups are as similar as possible, as long as the groups are big enough. That is why the assessment has to include a large number of young people if we are to be able to draw well-founded conclusions about the impact of the initiative.
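A quick simulation illustrates why group size matters. The "motivation" scores below are invented, purely to show that the chance imbalance between two randomly drawn groups shrinks as the groups grow.

```python
import random
import statistics

def average_group_gap(group_size, trials=2000, seed=1):
    """Average absolute difference in mean 'motivation' between two
    groups drawn at random, for a given group size. Illustrative only."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(trials):
        group_a = [rng.random() for _ in range(group_size)]
        group_b = [rng.random() for _ in range(group_size)]
        gaps.append(abs(statistics.mean(group_a) - statistics.mean(group_b)))
    return statistics.mean(gaps)

# Chance imbalance between the groups shrinks as the groups grow.
print(average_group_gap(10) > average_group_gap(500))  # True
```

With ten participants per group, the two groups can differ noticeably just by chance; with hundreds per group, the random draw makes them nearly identical on average.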


Randomisation and numerous participants are thus the two key ingredients in a randomised controlled trial. They ensure that when we subsequently compare the NExTWORK group with the control group, we can be confident that a difference between the two groups is attributable to NExTWORK because the groups are otherwise identical on average.
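As a sketch, the comparison itself is a simple difference in success rates. The outcome data below are made up and far smaller than a real trial would use; they only show the shape of the calculation.

```python
# Hypothetical outcomes: 1 = found a job or started an education, 0 = did not.
nextwork_outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # treatment group
control_outcomes  = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # control group

def success_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# Because assignment was random, the difference in success rates
# estimates the causal effect of the initiative.
effect = success_rate(nextwork_outcomes) - success_rate(control_outcomes)
print(f"{effect:.0%}")  # 30%
```

In a real evaluation this difference would of course be accompanied by statistical uncertainty measures, which is another reason large samples are needed.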


Is it worth it?

If we discover that more of the young people from the NExTWORK group have found work or started an education than their counterparts in the control group, we have to weigh this improvement against the extra resources the NExTWORK programme requires compared with the existing approach at the job centres, in order to judge whether the initiative is cost-effective.


As such, a randomised controlled trial does more than simply tell us whether the initiative is effective; it also shows how large the impact is relative to the existing initiative(s) in the area. When we compare this change with the relative resource consumption of the new initiative, we can establish whether the initiative generates positive change in a cost-effective manner and therefore has the potential to be of benefit to society.
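A back-of-the-envelope version of that calculation could look as follows. Every figure here is hypothetical, chosen only to show how effect size and extra cost combine into a cost per additional success.

```python
# All figures are hypothetical, for illustration only.
extra_successes_per_participant = 0.30   # 30 pct. points more in job/education
cost_per_participant_new = 25_000        # assumed cost of the new programme
cost_per_participant_standard = 10_000   # assumed cost of the standard offer

extra_cost = cost_per_participant_new - cost_per_participant_standard
# Cost of one additional young person in a job or an education:
cost_per_extra_success = round(extra_cost / extra_successes_per_participant)
print(cost_per_extra_success)  # 50000
```

The smaller this figure, the stronger the case that the initiative delivers its improvement cost-effectively.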

Qualitative insights

When we perform an impact assessment, the quantitative results from a randomised controlled trial are not sufficient in and of themselves. These results need to be linked to qualitative insights from the implementation phase itself: what appears to work, how, when and under what circumstances? Observations and interview data supplement and add nuance to the quantitative results, which is why we back our impact assessments with implementation evaluations.


Credible evaluations


It is crucial to us that the results of our evaluations be credible. That is why we commission external researchers to lead the evaluations, and why we always make sure that the results are published. To ensure transparency in our work with the data, we prepare pre-analysis plans that are published before we have access to the final data. A pre-analysis plan is a document that describes in detail how the data will be analysed and which findings will be presented in the final evaluation report. In this way, we can assure the general public that all findings – both positive and negative – will be presented, and eliminate any suspicion of cherry-picking the results.

Latest evaluations

Evaluation of TipsByText

See evaluation

Evaluation of Perspekt 2.0

See evaluation