A Protocol to Assess Contextual Factors During Program Impact Evaluation: A Case Study of a STEM Gender Equity Intervention in Higher Education

Suzanne Nobrega*, Kasper Edwards, Mazen El Ghaziri, Lauren Giacobbe, Serena Rice, Laura Punnett

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Program evaluations that lack experimental design often fail to produce evidence of impact because there is no available control group. Theory-based evaluations can generate evidence of a program's causal effects if evaluators collect evidence along the theorized causal chain and identify possible competing causes. However, few methods are available for assessing competing causes in the program environment. Effect Modifier Assessment (EMA) is a method previously used in smaller-scale studies to assess possible competing causes of observed changes following an intervention. In our case study of a university gender equity intervention, EMA generated useful evidence of competing causes to augment program evaluation. Top-down administrative culture, poor experiences with hiring and promotion, and workload were identified as impeding forces that might have reduced program benefits. The EMA addresses a methodological gap in theory-based evaluation and might be useful in a variety of program settings.

Original language: English
Journal: American Journal of Evaluation
ISSN: 1098-2140
Publication status: Accepted/In press - 2024

Keywords

  • Case studies
  • Higher education
  • Impact evaluation
  • Qualitative methods
  • Theory-based evaluation

