We note that this rather critical assessment applies to many, but not all, of the studies we identified. To advance this area of public health evidence generation, we now consider some potential ways forward by proposing a framework for qualitative process evaluations from a complex systems perspective. The first phase is intended to produce a static description of the system at an early time point; a second phase then analyzes how that system changes over time.
- This is helpful in enabling optimal targeting of particular interventions and in highlighting which aspects might need to be adapted for other groups.
- Once you’ve finished your process evaluation plan, you are ready to move on to Step 8, in which you’ll plan your outcome evaluation to examine whether you are achieving the changes you seek among individuals receiving your program.
- In studies where this interference is deemed problematic, researchers can opt for a more distant relationship between the intervention and evaluation teams, such as that occurring in the study of hypertension treatment in South Africa (case study 1.2).
- Contextual factors may also affect how the target audience receives and reacts to the intervention. Hypotheses about causal mechanisms should therefore be generated with consideration of how contextual factors might strengthen or weaken the intervention and thereby affect outcomes.
- Where a study gave rise to more than one publication, we considered the publications “linked” and extracted data from across them.
The methods used to process, input and analyse the clinical logs collected from the main trial sites were identical to the methods used for the clinical logs collected in the case study phase (see Chapter 4), with two additions. Evaluation is a systematic process designed to help you better understand – and improve – your participants’ experience of your outreach activities. It involves collecting information (for example feedback, observations or quiz results) and reflecting on what worked well, what could be improved, and what changed for the people involved as a result of their participation.
Developing guidance for process evaluations of complex public health interventions
Qualitative interviews were held with clinical leaders in the intervention and supported implementation trial arms before the intervention phase began. Health professionals in the usual care arm were also interviewed to enable comparison of continence management across trial arms; to minimise any change in practice, these interviews were conducted at the end of the data collection period. The process evaluation in Tanzania and Canada contributed to the formative stage of the intervention by identifying discrepancies between text messages created by researchers and those preferred by recipients, enabling a change in the study design prior to commencement. The objective of this paper is therefore to describe the different process evaluation approaches used in the first round of GACD projects related to hypertension, and to document the findings and lessons learned in various global settings.
Most of the interventions (five) were tested in randomized controlled trials; one used a stepped-wedge trial and one a pre-post study design. A process evaluation is quite different from an outcome evaluation, although it is undertaken at the same time. It is concerned with how an intervention is working: it examines whether the intervention is being delivered as planned and assesses strengths and weaknesses in content and delivery.
What makes process evaluation different
Most process evaluations use qualitative methods, many of which require significant amounts of time in the field. These include observation of community meetings or project activities, long open-ended interviews, focus groups, and social network analysis. So, while many organizations are already using process evaluation, there is still room for these approaches to be systematized, with larger budget and time commitments and more specialized researchers. Our study has demonstrated the need to consider process evaluation early in the research cycle so as to optimize design and data collection throughout the implementation cycle. When done early in the project cycle, process evaluations can help to optimize implementation of the intervention, as was done in Kenya through repeated training for the health providers delivering the intervention.
Evaluating the “input” (the very first column in a logic model) is just as valid as evaluating the last columns about outcomes. It is called a “logic” model, after all: logically, there is a chain of cause and effect, which means that if we have the right resources at the very beginning of the chain (inputs), we assume we will be able to reach the outcomes to which we aspire. Impact or outcome evaluations are undertaken when it is important to know whether, and how well, the objectives of a project or program were met. Process evaluation is also helpful when a program fails to achieve its goals for some or all of the target population: it helps reveal whether this was because of a failure of implementation, a design flaw in the program, some external barrier in the operating environment, or a combination of these and other factors. The process evaluation in India is still underway, but the process evaluation of the training of Accredited Social Health Activists (ASHAs) demonstrated that the intervention was successfully implemented and improved skills, knowledge and motivation among the ASHAs [21, 22].
The information was summarized and reported descriptively and narratively in relation to the themes above. Overarching issues were identified by the working group that had been established to oversee the project. The working group comprised researchers who had all been involved in the different process evaluations, and it helped to draw out the main implications of the process evaluations for the projects and for policy, as well as lessons to inform future process evaluations. Organizations need to conduct process and outcome evaluations to model program successes and learn from program failures.
Systems thinking prompts researchers and practitioners to consider the boundaries of the system they are studying or in which they are working [19], and places an emphasis on the interactions and relationships among system elements and between the system and its broader environment [1,6]. Applying concepts from complexity science further prompts consideration of how those interactions create nonlinear chains of cause and effect, are unpredictable, unfold over time, and give rise to system-level emergent outcomes [20]. The ToC approach has been successfully used to design, implement and evaluate complex community initiatives, and more recently has been applied to complex health interventions, including at LSHTM.
The authors declare that the funders did not have a role in the design of the studies; the collection, analysis, and interpretation of data; or the writing of the manuscript. The first step in the process is to define exactly what you want to find out from the evaluation. This may depend on who the evaluation is for – is it just for your own purposes, or is there information you need to collect and feed back to stakeholders or funders?
Over-emphasizing outcome evaluation at the cost of other types, especially process evaluation, is a disservice to nonprofits and the sector, because process evaluation allows a nonprofit to look at how it develops itself, its structures, its supporting programs such as communications and marketing, and even its fund development in order to achieve the outcomes everyone wants it to achieve. While program evaluation can seem like a daunting task at times, evaluations are key to successful program learning and improvement for any organization.