Pivot! New opportunities in culturally-responsive evaluation

by Sarah Hellesen

Even in years not dominated by a global pandemic, our plans do not always go the way we’d like.

In tobacco control, that might look like a key decision maker opposing a policy we’d hoped to pass, or a survey failing to yield helpful data. Sometimes the plans that need to change are our literal evaluation plans. And sometimes the events we look forward to attending all year pivot completely.

Along with my colleagues, I attended the first virtual American Evaluation Association annual meeting, Eval20, earlier this fall. While the experience didn’t quite capture what we love about these conferences, I did walk away with some greater insights into the field of culturally-responsive evaluation.

As you might expect, the COVID-19 pandemic was a significant theme throughout the conference. A key session that I attended was the virtual panel Pivoting, Truth-Telling, and Healing: Shining the Light on Evaluation Challenges and Opportunities During and Beyond a Global Pandemic. The panelists were Dr. Tracy Hilliard, Director of the Center for Culturally Responsive Engagement; Howard Walters of the W.K. Kellogg Foundation; and Nivedita Ranade of Creative Research Solutions.

Panelists at the virtual Eval20 event

The panelists in this session discussed how their culturally-responsive evaluation practice has changed in light of COVID-19, acknowledging that even as the virus spread across the globe, it has not impacted all groups equally. Populations that already face economic and health disparities in the United States have suffered disproportionately high COVID-19 morbidity and mortality in their communities. The evaluators on the panel pointed out the importance of acknowledging this intersectionality in the pandemic’s toll, and discussed how they have sought to look at data differently as a result.

“It’s lifted up the importance of not just quantitative but also qualitative data, and listening to community voice,” said panelist Howard Walters.

Dr. Tracy Hilliard added that this year has presented a “unique opportunity for engaging community members with the intention that their voices would directly influence how data is being used.” Because COVID-19 is a new virus, the scientific and medical community couldn’t rely entirely on existing knowledge. The lived experiences of patients and affected communities became valuable sources of information for how to treat the virus and prevent its spread.

This led to a discussion of what the panelists termed “practice-based evidence,” as opposed to evidence-based practice.

Dr. Hilliard asked, “Where has the evidence come from, historically? What are the cultural norms that we’ve relied on?” The answer, of course, is that the evidence we rely on in evaluation and research in the United States remains largely shaped by white cultural norms. Evaluation data, like any data, is not free from bias.

Practice-based evidence necessitates “leaning in to the experiences of those impacted by a particular situation.” Evaluators draw on a body of evidence from the community and develop the evaluation with them, rather than imposing externally developed practices upon them.

This resonated with me, as it is very similar to TCEC’s simplified definition of cultural humility: doing something with people rather than doing something for people.


I think it’s important to remember that the act of evaluation is, at its core, a dynamic process. It is about learning what works and what doesn’t. The pandemic may have made some of the inequities in our society more visible, but it’s necessary for us as evaluators and public health workers to remember that they were there long before the virus. We are overdue for better approaches to cultural responsiveness. As Dr. Hilliard remarked, “We all have much more to learn.”

For our part, the Tobacco Control Evaluation Center continues to develop and refine our understanding of cultural humility. COVID-related challenges have opened up a window of time for changing how we do our evaluation work. Why not use this opportunity to make it more inclusive and uplift community voices?