Abstract
Program evaluations that lack experimental design often fail to produce evidence of impact because no control group is available. Theory-based evaluations can generate evidence of a program's causal effects if evaluators collect evidence along the theorized causal chain and identify possible competing causes. However, few methods are available for assessing competing causes in the program environment. Effect Modifier Assessment (EMA) is a method previously used in smaller-scale studies to assess possible competing causes of observed changes following an intervention. In our case study of a university gender equity intervention, EMA generated useful evidence of competing causes to augment program evaluation. Top-down administrative culture, poor experiences with hiring and promotion, and workload were identified as impeding forces that might have reduced program benefits. EMA addresses a methodological gap in theory-based evaluation and might be useful in a variety of program settings.
| Original language | English |
| --- | --- |
| Journal | American Journal of Evaluation |
| ISSN | 1098-2140 |
| DOIs | |
| Publication status | Accepted/In press - 2024 |
Keywords
- Case studies
- Higher education
- Impact evaluation
- Qualitative methods
- Theory-based evaluation