Evaluation

This playlist features sessions on different approaches, techniques, and analyses that can be used in the evaluation of programs and interventions.

 

Assessing Fidelity to Evidence-Based Programs/Practices

Aired on June 12, 2018. Dixie King, Ph.D., focuses on three common ways to measure fidelity to evidence-based programs: through structure, content, and delivery. The use of "evidence-based programs" has become a common requirement in applications for grant funding. Replicating an evidence-based program "with fidelity" can be broadly defined as "replicating a program in a way that research shows to be effective." This presents distinct challenges for both program staff and the program evaluator: the former may be struggling to replicate the program under adverse conditions or in the face of policy prohibitions, while the evaluator must determine what constitutes "fidelity" and what compromises it beyond repair. The session also explores how fidelity to evidence-based practice can become a key to program sustainability.

 

Participatory Action Research

Aired on June 13, 2018. Amanda Aykanian, M.A., and Dixie King, Ph.D., explore participatory action research (PAR), a methodology for research and evaluation that engages members of marginalized and oppressed groups as co-researchers. Grounded in action and participatory research traditions, PAR is change-oriented and values the lived experience of the people on whom research is traditionally done. This interactive session defines PAR, highlights how it differs from other forms of research, and introduces considerations for implementing a PAR study. It also explores the pros and cons of PAR and offers suggestions for grantees interested in pursuing this approach to evaluation, with examples from behavioral health settings to illustrate key points.

 

The Basics of Impact Evaluation

Aired on June 12, 2018. Madeleine Wallace, Ph.D., presents on how to measure the impact programs make. Well-constructed and well-implemented programs can have an impact far beyond the program itself, with implications for entire communities as well as for local, state, and federal policy. How to measure this impact is the challenge. This session focuses on projects in which the funder, evaluator, and program staff partnered to expand the scope of an evaluation, and on the results of those partnerships.

 

Numbers and the Stories Behind Them

Aired on May 30, 2018. Dixie King, Ph.D., explores the various ways in which quantitative and qualitative approaches can appreciably enhance the value of an evaluation. Though almost everyone agrees that both are important to evaluation, few evaluators are trained to effectively integrate qualitative and quantitative data in the analysis of process and outcome data. While there is (with just cause) a bias toward statistical over qualitative data collection in program evaluation, failing to integrate the two methodologies can fundamentally undermine the comprehensiveness and usefulness of an evaluation.

 
