What Do You Read to Lead?
Rethinking Evaluation
I had the opportunity to speak with Elena Harman of Vantage Evaluation about her recent book, The Great Nonprofit Evaluation Reboot: A New Approach Every Staff Member Can Understand. It made me rethink how nonprofits approach evaluation.
Elena’s premise is that nonprofits aren’t tapping into the potential that evaluation provides for their programs, strategies, and ongoing learning.
We tend to think of evaluation as something imposed upon nonprofits by others: to demonstrate impact to funders, to prove dollars weren’t wasted, to meet government contract requirements. Most nonprofits assume a defensive posture, keeping evaluators at arm’s length and hoping the results are consistent with whatever messaging they’ve adopted about outcomes.
Elena suggests that nonprofits flip the narrative and make evaluation work for them.
Why can’t evaluation be a learning tool that, yes, demonstrates impact, but also provides feedback for programs and enables nonprofits to improve based on learned and lived experience? Evaluation that is integrated into day-to-day work could help leaders quickly assess whether new strategies are on track and which components of those strategies are most critical to overall success.
One key takeaway is that nonprofits spend too much time collecting data and too little on planning and reflection. One solution is to think differently about how nonprofits use the myriad sources of information they already collect on their programs.
Most nonprofits are already gathering program and client feedback, even if not for a formal evaluation. What if all of those sources of data and information were reviewed at biannual reflection sessions, where key strategic questions about programs and clients were raised? What if more organizations asked themselves: What evidence do we have about what’s working in our programs, and what do we know about how we need to improve?
This type of ongoing reflection on all types of information (surveys, anecdotes, program statistics) could give staff a framework for accelerating their thinking, in real time, about program design and delivery. It could also demystify why nonprofits gather this information in the first place and help board members connect more immediately with the organization’s rationale.
One of the examples Elena shared with me was Colorado Covering Kids and Families (CCKF), which used evaluation as a learning tool for its conference. Rather than administering the standard evaluation form at the end of the conference or a session, CCKF integrated evaluation before the conference (including during the planning stage), after individual sessions, and at the end of the whole conference. The team involved a larger group of stakeholders to generate a wider range of questions for the evaluation work to answer. Finally, they held a learning conversation with all stakeholders immediately after the conference, while the information was fresh in their minds, to identify lessons learned and next steps.
Ever efficient with their resources, nonprofits need to rethink their mindset about evaluation. Using evaluation as a learning tool, not just “something we have to do,” is key. Savvy nonprofits will shift how they talk about and approach evaluation, using it as a dynamic springboard for improvement.