California Tobacco Control Evaluation Guide

Table of Contents


INTRODUCTION

This guide is a resource for conducting evaluations of tobacco control programs led by local health departments, community organizations, and statewide and regional partners funded by the California Tobacco Control Program (CTCP). To address the needs and questions expressed at listening sessions held with evaluators, funded projects, and program consultants at CTCP, this guide and related resources are designed to:

  • Describe how evaluation works within tobacco control
  • Explain how evaluation supports (and fits into) project efforts
  • Correct common misconceptions about using evaluation in advocacy campaigns
  • Set expectations for evaluation utility, efficacy, and rigor
  • Serve as the primary reference on evaluation for tobacco control program evaluators, project staff and subcontractors, CTCP, and the Tobacco Control Evaluation Center (TCEC) 

It is important to note that the Evaluation and Surveillance Section at CTCP oversees statewide tobacco control evaluation and surveillance systems, which guide state-level priorities and inform individual projects. This guide focuses on local tobacco control evaluation conducted by CTCP-funded projects in counties, cities, regions, and specific populations throughout California.

WHO THIS GUIDE IS FOR

Within tobacco control, a wide spectrum of people may be involved in a project’s evaluation work: evaluators, epidemiologists, project directors/coordinators, health educators, community engagement coordinators, media specialists, coalition members, community members, as well as program consultants, research scientists, and procurement managers at CTCP. 

Some may be new to evaluation and its uses; others may just be new to tobacco control or want a refresher on best practices. Whatever level of understanding readers may have, this guide aims to be instructive to everyone, providing introductory explanations as well as addressing how to get more utility from evaluation work.

HOW TO USE THIS GUIDE

This guide can be read in its entirety or used to find specific answers. The links in the table of contents allow users to jump to different sections based on immediate needs. 

Evaluation novices will find the first few sections useful in providing a foundational understanding of what evaluation is and does, while evaluators new to tobacco control will benefit from the section that explains how evaluation is used to further tobacco control efforts and the requirements and expectations for doing so. 

From there, the guide covers each phase of tobacco control evaluation work in chronological order, starting with evaluation planning. A large part of the content deals with the various evaluation activities used in tobacco control work, explaining what they consist of, how they are commonly used by projects, and how to ensure they produce the most benefit. Each section also includes links to resources on the Tobacco Control Evaluation Center (TCEC) website that cover the specifics of how to plan and conduct evaluation activities and analyze evaluation data. The TCEC website also contains instruction, templates, and sample data collection instruments for doing and using evaluation.

Evaluation teaches that strategies can always be improved, and that includes this guide. Send any suggestions for this resource to the Tobacco Control Evaluation Center at tcecta@phmail.ucdavis.edu

TCEC hopes the whole evaluation team (evaluators, epidemiologists, project directors/coordinators, health educators, community engagement coordinators, media specialists, coalition members, community members as well as program consultants, procurement managers, and research scientists at CTCP) will use this guide to better support local health departments, community organizations, and statewide and regional partners funded by CTCP to end commercial tobacco in all communities in California.


EVALUATION OVERVIEW

WHERE TO START

This section of the guide is designed for those who may not be familiar with evaluation. It describes what is meant by evaluation, why evaluation is used in tobacco control work, and the who, when and where of the evaluation life cycle. 

WHAT IS EVALUATION

Evaluation is a process of systematically collecting and assessing information in order to make informed decisions. Evaluations should be designed to produce actionable findings and consider the utility for intended audiences at every step.

Evaluation is driven by data: either data collected directly by the project through interviews, surveys, and observations, or data obtained from existing sources such as municipal codes, meeting minutes, website analytics, social media metrics, the U.S. Census, vital records, and statewide tobacco surveillance data sources.

For evaluation results to be useful, careful attention must be given to asking the right questions, adhering to a data collection protocol, and drawing logical conclusions from the analysis. This should all be done in collaboration with stakeholders, people who will have an interest in or be affected by the results and their application. 

WHY DO WE NEED TO DO, USE, AND CONSUME EVALUATION? 

Evaluation is an essential part of any scope of work. In fact, the Centers for Disease Control and Prevention (CDC) identifies evaluation as an essential public health service. Why? By knowing what is working and what is not, for whom, and in which contexts, tobacco control programs can be more successful in changing norms that promote a tobacco-free California for all. Because of this, all projects funded by the California Tobacco Control Program (CTCP) are required to carry out evaluation activities as part of their tobacco control efforts.

In tobacco control work, projects use advocacy- and utilization-focused evaluation to improve program strategies and next steps, justify the need for a program’s continued existence and funding levels, learn from successes and challenges, and measure the effectiveness of project efforts, whether the intended outcome is achieved or not. 

Evaluation also provides ample opportunities for community involvement when done as a collaborative process between the staff and advocates conducting activities and the stakeholders affected directly or indirectly by program activities. Involving the community throughout the evaluation process helps make project efforts and decisions more reflective of the needs and lived experiences of those who will be affected by program objectives. 

HOW IS EVALUATION CONDUCTED?

Evaluation goes through an entire “life cycle,” from planning to data collection to reporting and sharing results. Throughout the whole process, evaluation teams must keep in mind each activity’s purpose and audience. Focusing on utility and audience helps guide decisions at each step of the process. The remainder of this guide explains how each phase of the cycle works.

A graphic depicting the evaluation process

 

WHO SHOULD BE INVOLVED IN EVALUATION?

Hint: It is more than just the project’s evaluator! While the evaluator should provide expertise in conceptualizing and designing activities, facilitating data collector trainings, and analyzing and interpreting results, other team members also play a role in the various phases of evaluation.

Tobacco control work is all about advocacy: building and then mobilizing a compelling groundswell of support for policy action. Evaluation plays an important role in that process, not only to explore the views of various stakeholders but also to involve them in evaluation design, data collection, and sense-making so they feel that they have ownership of the findings. When, who, and how to involve stakeholders is the question. See Planning, Conducting Evaluation, and Reporting sections for more on this topic.

Evaluation provides multiple opportunities for inclusion, engagement, and cultural humility throughout the evaluation activity lifecycle. Practicing cultural humility in evaluation more accurately tells the story of a broader swath of the community and shows appreciation and recognition of the community’s contributions. Involving a wider range of stakeholders in evaluation activities can provide local context crucial to accurate analysis and interpretation of data. Evaluation is an opportunity to tell the story of communities and public health programs.

WHEN IS EVALUATION CONDUCTED?

Not just at the end of the funding period! Evaluation is conducted throughout the entire scope of work and is strategically timed to support and/or inform intervention activities. 

  • At the beginning of the scope of work, evaluation is used to assess and document the need for the objective, establish baseline measures (starting levels before intervention activities), and to build bridges with key elements of the community (find out how to approach and engage a population, learn cultural norms, and hear their perspectives). 
  • As project activities are carried out, evaluation is used to support and inform the program (building a case for action, identifying best approaches, and assessing when to move ahead and when more evidence or education effort is needed). The evaluation allows a project to figure out if it needs to adjust and adapt its intervention activities. Because of this, it is crucial that evaluators analyze and report results back to the project as quickly as possible so next steps can be taken. 
  • Toward the end of the scope of work, evaluation can be used to compare conditions after the intervention to the baseline in order to measure change, gather feedback on how strategies worked, identify what turned out to be crucial tipping points, and make recommendations for improvement. Comparing conditions or beliefs that changed over time is a good way to demonstrate the effect of project interventions.

Intervention and evaluation activities go hand in hand, but implementation does not always go as planned. There will inevitably be a need for flexibility and adaptability to meet circumstances that arise. Sometimes external factors can throw a wrench into the plan and timeline and it becomes impossible to get enough data. Other times, the results may bring up more questions that need to be answered. In either case, be sure to communicate with the whole evaluation team including the CTCP project consultant so that the project can adapt to unexpected changes.

WHERE IS EVALUATION CONDUCTED?

The “where” of evaluation depends on the objective and what is being measured. Projects should consider the population of interest, sources of data, and locations or vantage points for data collection. The work is carried out in specific jurisdictions selected in advance as stated in a project’s objective and fine-tuned through intervention activities such as the Midwest Academy Strategy Chart (MASC).

Objectives are developed based on procurement specifications, negotiations with CTCP, community input, and/or the Communities of Excellence (CX) needs assessment process which analyzes the scope of the problem, community and decision maker support, past work, and other conditions for a given indicator or asset in the community. Often, several jurisdictions are initially targeted and then, depending on which look more promising based on project goals and jurisdiction needs, one or more becomes the focus of project efforts and resources.

USING EVALUATION IN TOBACCO CONTROL 

The professional field of evaluation has been expanding to include several different theories and approaches: program evaluation and impact evaluation, to define a public health program and improve its effects; advocacy evaluation and utilization-focused evaluation, to inform and support public health policy, environmental, and community change; and participatory evaluation and empowerment evaluation, to address the inherent imbalance of power in any evaluation by including stakeholders as co-evaluators for capacity building and sustained change. Evaluators may be trained in, or find guidance and inspiration from, a combination of these approaches and many other sources.

TOBACCO CONTROL IN CALIFORNIA 

The California Tobacco Control Program (CTCP) funds city/county health departments (Local Lead Agencies, or LLAs), community-based organizations (competitive grantees), behavioral health service providers (health systems, hospitals, clinics, treatment centers), and universities and non-profit organizations (statewide technical assistance providers, statewide projects, and other surveillance and evaluation contractors) to pursue tobacco control policies or provide related services. Each funding procurement has its own requirements, but almost all include a complex evaluation component.

In a project workplan, intervention and evaluation activities fit together in a strategic sequence so that the timing of activities builds increasing power and influence, leverages momentum, and leads to a policy win. Below is a visual representation of the sequence of strategies in a typical policy campaign.

A timeline graphic depicting the sequence of intervention and evaluation strategies in a typical policy campaign

EVALUATION REQUIREMENTS

Evaluation plays an integral role in tobacco control efforts and for that reason, there are a number of important requirements that prescribe how evaluation is expected to be used and conducted:

  • S.M.A.R.T. Objectives: Every project develops a workplan with objectives as specified in the funding procurement materials. Each workplan states a SMART objective, describes the purpose and scope of the intervention and evaluation activities that move the objective forward, gives the timeframe in which activities will be completed, sets percentage deliverables as part of the total budget, and identifies responsible parties and tracking measures. For more guidance on this, see Evaluation Planning.
  • Be Bold: The funding agency, CTCP, supports plans that are ambitious, logical, and cost effective. Instead of adding unnecessary activities or setting unreasonable sample sizes, workplans should demonstrate the utility and rigor of activities by framing the timing sensibly, anticipating convincing samples, incorporating data collector trainings where needed, diversifying the stakeholders involved, and building in time to engage disempowered segments of the population. Projects are held accountable for completing activities, not for achieving the objective, so it is acceptable to experiment with creative strategies.
  • Community Needs Assessment: Before developing workplans, projects conduct a community needs assessment to determine priorities and formulate objectives. 
    • For LLAs, this participatory process is required by Prop-99 enabling legislation and is operationalized as the Communities of Excellence (CX) process. CX involves a variety of community members and stakeholders in the assessment and decision-making. It consists of looking at relevant community-level data across a set of indicators and assets and then ranking community need, readiness for change, and likelihood of success. These rankings are used to select priority indicators and assets to develop objectives for the new workplan.
    • For most other projects, the needs assessment process is less prescribed and might include taking part in, or utilizing the results of, the CX processes of nearby projects, or at least conferring with LLAs or existing agencies to see how project efforts may support and complement each other.
  • Theory of Change: As part of the workplan, projects identify a theory of change model on which expectations for success are based. A logic model can be used but is not required to submit the workplan. For more on this, see the Plan/Application section on the Online Tobacco Information System (OTIS).
  • Evaluation + Intervention: Evaluation activities are used strategically to support and inform intervention activities with the aim of organizing communities to take actions promoting progress toward achieving the objective. 
  • OTIS Wizards: Guidance for developing and writing intervention and evaluation activities is available in the form of wizards found in OTIS. See Building an Evaluation Activity for an example of wording and deliverables for an evaluation activity. For more guidance on developing an evaluation plan, see Evaluation Planning.
  • Outcome and Process Measures: Every evaluation plan makes use of process measures; only certain plan types also include outcome measures. In California Tobacco Control, “Outcome” is defined more narrowly than in other evaluation settings: something observable that confirms the implementation of policy, systems, environmental, or behavior change. Rather than simply whether a policy has passed, an acceptable outcome measure for a tobacco retail licensing (TRL) objective, for example, would compare tobacco purchase survey results before and after TRL policy implementation. For more about what is and is not an acceptable outcome measure, see Common Outcome Measures.
    • Pre/Post Measures: Due to budgetary limitations, few tobacco control evaluations include a comparison group, so pre-/post-measure comparisons are commonly used to measure outcomes. However, in some cases a post-intervention-only observation or survey can also serve as an acceptable measure of effective policy implementation.
  • Plan Types: The focus of each objective connects to the plan type, a drop-down menu choice in OTIS that determines whether Outcome Evaluation is needed. The plan types are: Legislated/Voluntary Policy – Adoption Only, Legislated/Voluntary Policy – Adoption and Implementation, Legislated/Voluntary Policy – Implementation Only, Individual Behavior Change, Other WITH Measurable Outcome, and Other WITHOUT Measurable Outcome. See Evaluation Planning for details. Unless there is an important reason to do otherwise, and a tangible effect or impact that an asset objective specifically plans to measure, most Other plan types should be WITHOUT Measurable Outcome.
  • Waves of Data Collection: Waves refer to two or more rounds of data collection conducted in the same manner among the same population of interest, within a predefined period of time or with intervention activities in between rounds. Multiple waves of data collection should be part of the same evaluation activity when data are collected in more than one period of time, in the same way, from the same sample, and using the same instrument.
    • However, if the topics or questions have different purposes or the data collection instruments differ meaningfully in each of the waves, then the data collection should be written as two separate evaluation activities. For example, if in the baseline wave key informants are asked about what they know about youth vaping and what role government should play in regulating youth access, and then in the follow up wave questions focus on how the policy got passed, this would not be considered two waves of the same interview protocol. Instead, these should be two separate key informant interview (KII) activities. 
    • Another example of activities that are not performed in waves include collecting data from one jurisdiction and then collecting data in another jurisdiction several months later. The data are measuring two different communities at two different time points and likely will require separate analyses rather than combining or treating them as waves of data collection.
    • Note: It is acceptable if the sample does not comprise exactly the same individuals, as long as it is drawn from the same population. Examples include public opinion polls collected from the same community that do not capture the same individuals in each round, or retail and park observations in which new locations open and others close between rounds.
  • Workplan in OTIS: The workplan is entered into OTIS, which operates as a monitoring and reporting system for projects and program consultants at CTCP to track progress every six months. Once approved, the workplan must be followed exactly unless the project gets permission from its Program Consultant (PC) to deviate from the plan for logical reasons. All changes should be documented in the OTIS Communication Log. Depending on the extent of the change, a scope of work revision may also be needed.
  • Evaluation Deliverable Percentages: Depending on the funding procurement, there may be minimum or maximum requirements for the total deliverable percentage allocated to evaluation. If there is a requirement, it will be specified in the procurement materials. 
  • Collaboration and Communication: While evaluators are encouraged to set up methods for getting progress updates from the projects they evaluate, certain internal activities (such as KIIs with project staff health educators or focus groups with coalition members involved in intervention activities) may NOT receive a percentage deliverable in the project workplan. In fact, they do not need to be in the workplan at all, although they should be part of the evaluator’s subcontract. If listed in the workplan (to remind projects to conduct such actions), they should be listed as collaboration activities and set at 0% deliverable. Communication should be a natural part of interactions between evaluators and project teams.
  • Evaluator Time:
    • External evaluators negotiate their scopes of work directly with the project. They are expected to apply their expertise and lend objectivity to developing data collection instruments, analyzing data, reporting results back to the project, and writing evaluation reports (even if others contribute portions of the content). See the section on Working with Evaluators for more about how projects and evaluators can divvy up responsibilities. 
    • For most projects, a qualified evaluator must spend a minimum of 4 hours on developing and/or reviewing the workplan for a new application or proposal. Once submitted, plans are reviewed by teams of CTCP staff, subject matter experts, and evaluators who score and provide feedback for strengthening the plan if funded. 
    • Most funding procurements also require at least 10% of a staff person’s time, or about four hours per week, for oversight of evaluation activities, ensuring that evaluation is being used to inform intervention activities and overall program strategies and to maintain communication between program and evaluation efforts.

Lastly, review the funding procurement materials, available on the CTCP Tobacco Control Funding Opportunities and Resources (TCFOR) website, for additional details about evaluation and scope of work requirements.

EVALUATOR REQUIREMENTS IN TOBACCO CONTROL

The previous section described the evaluation requirements for tobacco control programs and how they may be unique compared to other programs. This section focuses on the requirements specifically for the evaluator, which apply to external evaluator, internal evaluator, and internal evaluator project manager positions.

LOCAL PROGRAM EVALUATOR QUALIFICATIONS

In California Tobacco Control, projects must have a trained evaluation specialist to lead the project’s evaluation efforts. To register in the Local Program Evaluator Directory, evaluators must meet the following basic requirements:

  • Complete at least one course in study design or have at least one year of experience determining the study design for an evaluation.
  • Have intermediate or higher proficiency in calculating sample size, developing a sampling scheme, and determining appropriate data collection methods (a worked sample size example follows this list).
  • Complete at least one course in program evaluation or have at least one year of experience planning and implementing a program evaluation.
  • Have intermediate or higher proficiency in evaluating behavior change, policy, or media interventions.
  • Complete at least two intermediate courses in statistics.
  • Have intermediate or higher expertise in using statistical software packages to analyze and interpret quantitative data.
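
Where the sample size qualification comes into play, a quick back-of-the-envelope calculation is often all a workplan needs. Below is a minimal sketch of the standard formula for estimating a proportion, n = z^2 * p * (1 - p) / e^2; the 50% expected proportion and 5% margin of error are illustrative defaults, not CTCP requirements.

```python
# Minimal sketch: sample size needed to estimate a proportion with a given
# margin of error, using n = z^2 * p * (1 - p) / e^2.
from math import ceil

def sample_size_for_proportion(margin_of_error: float,
                               expected_proportion: float = 0.5,
                               z: float = 1.96) -> int:
    """Sample size for estimating a proportion at roughly 95% confidence (z = 1.96)."""
    n = (z ** 2) * expected_proportion * (1 - expected_proportion) / margin_of_error ** 2
    return ceil(n)

# Example: a public opinion survey aiming for a +/- 5 percentage point margin
print(sample_size_for_proportion(0.05))  # -> 385
```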

Evaluators work collaboratively with project staff, coalition members, and other stakeholders to prepare the evaluation plan, conduct evaluation activities, prepare progress and evaluation reports, and use evaluation for strategy development and decision-making. 

Evaluators also have statewide and national professional organizations to turn to for guidance about how to do evaluation. Here are a few important resources to review: 

Each funding opportunity will specify additional evaluator requirements, such as the percentage of the budget devoted to evaluation staff or the identification of a staff person as the lead for evaluation activities. For details, see the requirements specific to the funding application procurement and the Policy Section of the CTCP Administrative and Policy Manual.

CULTURAL HUMILITY IN EVALUATION

State and local tobacco control program efforts continue to focus on priority populations most impacted by tobacco-related health disparities, and as they do, intervention and evaluation strategies must also be adapted to cultural norms. An effective way to do this is by applying the framework of cultural humility to tobacco control evaluation activities. 

Cultural humility means “having an interpersonal stance that is other-oriented rather than self-focused, characterized by respect and lack of superiority toward an individual’s cultural background and experience.”[1] It focuses on personal, team, and/or organizational self-reflection and a willingness to approach evaluation work with an open mind that makes space for valuing and learning from the knowledge and experience of others. 

Tervalon and Murray-García introduced cultural humility principles including lifelong commitment to self-evaluation and redressing power imbalances.[2] Practicing cultural humility requires public health practitioners to examine their own assumptions, biases, and/or perspectives that impact their work. If these have become institutionalized in some way, projects must then develop strategies to mitigate them. It is important to consciously address the power differentials that exist in evaluation, not only between the evaluator and what or who is being evaluated but also in the very framework of evaluation itself.

 


[1] Hook, J. N., Davis, D. E., Owen, J., Worthington, E. L., Jr., & Utsey, S. O. (2013). Cultural humility: Measuring openness to culturally diverse clients. Journal of Counseling Psychology, 60(3), 353-366. https://pubmed.ncbi.nlm.nih.gov/23647387/

[2] Tervalon, M., & Murray-García, J. (1998). Cultural humility versus cultural competence: A critical distinction in defining physician training outcomes in multicultural education. Journal of Health Care for the Poor and Underserved, 9(2), 117-125. https://muse.jhu.edu/article/268076/pdf


 

There are also built-in assumptions and biases from each team member involved in the evaluation work. Each person brings something different to the table and inevitably influences the team’s access, strategies, and priorities, which impacts the results of the data that are collected and used to make decisions about the health of a community. 

When applied to community education, policy advocacy, data collection and analysis, and reporting results to stakeholders, cultural humility can lead to mutual empowerment and respect, stronger partnerships, and lifelong learning. This may include embracing the certainty that no one is an expert at everything, so everyone has something to learn. 

It is in that journey to learn more about other cultures and communities that tobacco control advocates can become more competent and inclusive. Tobacco control partners are then better equipped to empower their communities and address health disparities. 

There are multiple ways to create a safe space for additional perspectives and infuse cultural humility in evaluation work. Here are some ideas:

  • Take the time to learn about the cultural groups involved in a program and/or evaluation.
  • Recognize that diversity means relationships of difference, including differences in communication, life view, definitions of family, identity, culture, experiences of institutional racism/sexism/ageism/homophobia/and other biases.
  • Recruit staff and stakeholders from diverse backgrounds to provide input and help with the evaluation process.
  • Avoid tokenism, in which an organization treats the perspective of one person as representative of their entire identity group.
  • Pilot test data collection instruments to ensure that questions and response options are appropriate for target populations.
  • Avoid jargon and exclusive language and behaviors.
  • Be reflective, both as an individual and with the team. Consider maintaining a journal or holding regular meetings to identify how results may be affected by biases and assumptions. These records can help projects be more transparent about the decision-making process.

For more tips about how to practice cultural humility in evaluation, see the EvalYOU article on the Tobacco Control Evaluation Center (TCEC) website, Cultural Humility in Evaluation: There Are No Experts. 

 

EVALUATION PLANNING

LAYING THE FOUNDATION OF THE EVALUATION WORKPLAN

This section of the guide provides information about the needs assessment process, developing objectives, how to use end-use strategizing for evaluation planning, and some helpful prompts to facilitate the planning process.

A lot of attention is placed on evaluation during the contracting and negotiation phases for CTCP-funded projects because a major part of a scope of work is the evaluation plan. The evaluation plan serves as a road map for the work that is to come, from the early stages of defining a problem that needs solving to measuring the impact of the solution after implementation. It also provides an opportunity to reflect on what the project is trying to change and what needs to be measured, to consider whose voices are being (and should be) included, to foster buy-in and transparency with stakeholders, and to get everyone on the same page from the start.

Since the project is held accountable to what is detailed in the scope of work, it is a good idea to incorporate the rationale for decisions into the narrative of the evaluation summary and each activity description so that these choices make sense later on: why certain jurisdictions were targeted, how sampling strategies and sizes were selected, which data collection methodologies were chosen, and other details that will help the staff who are implementing the plan.

COMMUNITIES OF EXCELLENCE (CX) PROCESS 

It all begins with a community needs assessment process where the project takes stock of the successes and policy gaps in the coverage area. For LLAs, this is a pre-defined procedure required by the funding agency, CTCP, which involves reviewing local level data on a certain number of pre-determined as well as elective indicators. Based on the data, the project and coalition members rate community need, awareness, readiness, likelihood of success, etc. for each indicator or asset to help select project priorities for the next scope of work cycle. For specifics about the CX needs assessment tools and process, see the CX tab on Partners. 

For other projects, the needs assessment process is typically much less involved. It may entail looking at past needs assessments, satisfaction surveys, local data, policy status, and talking with the nearby LLA, technical assistance providers, and/or other projects in the region to see what potential objectives may be needed in nearby communities.

ESTABLISHING OBJECTIVES

Once there is a consensus about which indicators or assets the project should address, it is time to develop SMART objectives:

  • Specific
  • Measurable
  • Achievable
  • Realistic
  • Timebound. 

Objectives should include a target jurisdiction or population and a completion date, and identify a specific result that will be achieved. An alternative to naming the specific jurisdiction can be to state a minimum (e.g., at least three cities in the North Coast part of Beachfront County such as Apple, Orange, and Kiwi town…).

Here is an example of a SMART objective: 

“By June 30, 2025, Junction City will adopt and implement a policy making all new multi-family housing complexes 100% smokefree and will include smoke or vapor from tobacco, marijuana, or any other substance from all common indoor and outdoor areas of the complex as well as individual units, balconies, patios, walkways, and a distance of at least 25 feet from doors and windows.” 

Objectives determine the direction and allocation of resources to the communities a project intends to work in. Without the logic of clearly written objectives, it is much more difficult to ensure that the intervention and evaluation activities in the scope of work are working together to support achievement of the objectives. Starting with a SMART objective helps to align intervention and evaluation activities in the scope of work. Evaluation helps define the extent of the problem within a community and the solutions needed to improve or eliminate the problem. It is not enough to assume that because a community lacks a policy, it needs a change. Evaluation will first help a project determine and communicate the need for policy, systems, and/or environmental change. For example, conducting a Young Adult Tobacco Purchase Survey can demonstrate a problem with underage sales of tobacco. Once the project, in conjunction with their community, determines a solution to the problem, evaluation is then used to bolster efforts and evaluate the effectiveness of messaging, training, and technical assistance. As policy, systems, and/or environmental change is adopted and implemented, evaluation measures the impact the change had in solving the problem over time.

HELPFUL PROMPTS FOR DEVELOPING AN OBJECTIVE 

  1. What is the problem the project is trying to address (e.g., drifting secondhand smoke in apartments)? This identifies the indicator or asset to select.
  2. What health inequity will the objective address (e.g., low-income individuals who have chronic diseases such as asthma and heart problems are also more likely to live in multi-unit housing where those conditions can be exacerbated by exposure to secondhand and thirdhand smoke)? The evaluation may be able to determine the impacts of a citywide policy on hospital visits or treatments for such diseases among specific populations. 
  3. What is the project trying to achieve with the objective? 
    • At what level will the change occur (county, city, specific neighborhood or population, household, or individual)? This is crucial in order to define the unit of analysis to measure.
    • Where is the greatest need and/or the most readiness? Name the specific target jurisdiction(s) or organization(s).
  4. What populations or jurisdiction(s) will be most affected by or benefit most from this? Be sure to think about who among the most vulnerable in the population can benefit and be engaged, rather than just making the objective and its approach one-size-fits-all.
  5. What action does the project want to take place (e.g., for city council to adopt and implement a citywide policy making 100% of apartment complexes smokefree; consider intermediate steps)? Here is where the project begins to put into words how success will be determined.
  6. Who has the power to do or give what is needed (e.g., a majority of city council members)? This determines the focus of the activities and the plan type.

OBJECTIVE WRITING TIPS

  • Each objective serves as a starting point and together with intervention and evaluation activities form the project’s work plan. All of the objectives and work plans together become the project’s scope of work for the funding period.
  • Objectives should not contain ranges for target jurisdictions, trainings, members, etc. Objectives may, however, set minimums (using wording such as “at least X,” “a minimum of X,” or “two of the following cities: Burney Falls, Mt. Hood, Flat Plains”).
  • Unless there is a good reason to do otherwise, set the target date as the last day of the funding period. The objective can always be achieved earlier, but this way, there is room to overcome unforeseen circumstances that may delay project efforts. For adoption and implementation activities, implementation may take place several months after policy adoption is completed. 
  • It can make sense for the project to diversify the stretch and type of objectives it plans to tackle in the same funding period. In situations where policy wins seem within reach, the objective should include policy adoption and implementation activities, while topic areas that represent new efforts might aim for policy adoption alone. See Evaluation Plan Type and the sections under Evaluation Planning.
  • Once all this is worked out, it is a good idea to begin writing the background section of the final evaluation report that will be due at the end of the funding period. While the details are fresh in mind, document the hopes and rationale for choosing this particular objective at this point in time which targets specific jurisdictions. Not only does this mean there will be one less thing to do when it is time to write the report, this background can also be an important touchstone to guide new personnel as they join the project. 

EVALUATION PLAN TYPE

The previous section of the guide describes how to get started with evaluation planning using the Communities of Excellence needs assessment or other background information collection, and how to start developing objectives. The aim and target of an objective help determine the plan type selection. In order to serve as a strong foundation for the series of decisions that follow, the objective and the plan type must be a good match for each other.

OPTIONS FOR EVALUATION PLAN TYPE

The evaluation plan type determines the types of data collection activities that are required to support the project’s objectives and whether the evaluation plan must include process and outcome data collection and analysis. As outlined below, there are four evaluation plan types:

  1. Legislated/Voluntary Policies
    • Adoption Only
    • Implementation Only
    • Both Adoption and Implementation
  2. Individual Behavior Change
  3. Other with Measurable Outcome
  4. Other without Measurable Outcome

Each of these plan types is described below.

Note: All objectives benefit from process evaluation activities. 

VOLUNTARY POLICY ADOPTION AND/OR IMPLEMENTATION

There are a number of indicators for which voluntary policy adoption and/or implementation are the best ways to achieve change. Such voluntary policies may cover college, faith-based, healthcare, and private school campuses. For voluntary policy work in these areas, both adoption and implementation activities are strongly encouraged because they help measure the effectiveness of a policy whose smaller reach and impact may not be captured by other public health data in the community. One benefit of voluntary policy adoption in these areas is that the adoption and implementation work can build toward larger legislative policy actions in the future, which may benefit from data about the local success of voluntary policies and from the residual capacity built while working with smaller subsections of the community. For example, a college campus or faith-based institution that is engaged in a voluntary policy effort may become the advocate that leads the charge for future legislative policy change.

VOLUNTARY OR LEGISLATED POLICIES 

Most objectives work at the community level by trying to adopt and/or implement legislated or voluntary tobacco control policies in one or more jurisdictions. CTCP prefers objectives that pursue legislated policy rather than voluntary policy because laws passed by city or county officials cover the entire jurisdiction and have a greater reach than voluntary policies. They are also much harder to overturn. Getting the city council to adopt and implement a law making all new apartment complexes smokefree in the whole city is an example of a legislated policy objective, whereas asking individual apartment owners or managers to make their complexes smokefree would be a voluntary policy objective.

INDIVIDUAL BEHAVIOR CHANGE 

Objectives with a more limited scope attempt to change individual behavior choices and actions. Examples include increasing the number or percentage of a certain population who receive cessation treatment and remain quit for six months or more. Because such objectives affect a much smaller population, they are only approved under certain special circumstances.

OTHER WITH MEASURABLE OUTCOME 

The least common plan type is Other with Measurable Outcome. This is only used when there is a real change or Outcome (with a capital O) or effect to measure, something more than just a result or count. For example, an objective focused on asset 2.1 to provide training to diverse community groups to engage in tobacco control work would NOT be Other with Measurable Outcome because the result would just be the number of organizations that participated in a training. However, if the objective focused on asset 3.6, equity in funding, with the goal of training a minimum of 10 diverse community groups on how to apply for tobacco control funding so that 50% of them are successful in receiving funding, there is an action that results from the intervention, and this would be an Other with Measurable Outcome plan type. This means that the plan will require an Outcome Evaluation activity.

OTHER WITHOUT MEASURABLE OUTCOME 

In almost all instances, “Other without Measurable Outcome” is the appropriate plan type for asset objectives. Very few asset objectives appropriately qualify as having a measurable outcome. An objective that aims to increase membership or conduct a certain number of trainings does NOT count as having a measurable outcome in California Tobacco Control. This means that the plan will not need an Outcome Evaluation activity. For example, if a statewide grantee applicant wants to use Asset indicator 2.1, Training and Skill-building, to provide training to diverse community groups to engage in tobacco control work, the objective would not require any measurable outcomes, because the crux of the evaluation plan would be measuring the number of organizations that participated in training activities, which does not measure change over time.

When in doubt, choose Other without Measurable Outcome as the plan type for an asset objective.

PROCESS vs. OUTCOME EVALUATION

Depending on the project objective language and the plan type, an evaluation plan will call for process evaluation activities and perhaps an outcome measure as well. These each serve different purposes. To find out more, this section dives into what these terms mean and what they tend to consist of in California Tobacco Control.

PROCESS EVALUATION 

While different plan types require a combination of data collection activities, ALL California Tobacco Control objectives benefit from the use of process evaluation components. Process evaluation activities are used to provide insights and feedback to the project about how to improve and/or enhance the intervention strategies and activities. Process evaluation activities examine what activities took place, who conducted the activities, who was reached by the activities, the quantity and effectiveness of activities delivered, the appropriateness of messaging, and/or satisfaction with project activities. 

Below are common process evaluation activities and typical uses based on the timing of the activity.

Using Process Evaluation
  • Education/Participant Survey
    • Information source: People who have participated in an educational event or other activity put on by the project, data collector trainings, coalition activities
    • Pre-intervention: To establish the status of knowledge or experience before the intervention
    • During intervention: To improve presentations or other educational activities; to assess data collector readiness
    • Post-intervention: To learn the extent to which the presentation/activity had the desired effect
  • Focus Groups
    • Information source: Series of group discussions with people who share something in common, e.g., coalition members, tenants, retirees, priority populations
    • Pre-intervention: To learn what approaches work with specific populations; to brainstorm strategies; to test instruments or messaging
    • During intervention: To refine talking points; for material testing
    • Post-intervention: Not common, but sometimes used to reflect on process
  • Key Informant Interviews
    • Information source: People who have in-depth experience or specialized knowledge: decision makers, community leaders, etc.
    • Pre-intervention: To identify potential barriers, promising strategies, and additional stakeholders
    • During intervention: To learn the status and effectiveness of strategies used so far
    • Post-intervention: To learn what worked and did not work in the project; to identify potential implementation issues
  • Media Activity Record
    • Information source: Print publications, radio, TV coverage, newsletters, etc.
    • Pre-intervention: To document media gaps in order to target media activities
    • During intervention: To monitor opportunities to implement the communication plan goals; to correct or augment the coverage of tobacco control issues in real time
    • Post-intervention: To determine if media activities are reaching the right audiences with the desired messaging; to document the amount, nature, and reach of media activities
  • Observation
    • Information source: Behaviors (smoking), objects (signage, tobacco litter, products, ads in stores), events, locations (housing, campuses, parks, beaches)
    • Pre-intervention: To learn about the extent of a problem (such as litter in parks); to serve as a baseline measure
    • During intervention: To keep data fresh; to add another wave of data collection to the overall analysis
    • Post-intervention: Not common in process evaluation; frequently used as an outcome measure
  • Policy Record Review
    • Information source: Records maintained by government agencies, tenant councils, or other institutions; biographical background, voting records, and past actions on similar issues
    • Pre-intervention: To identify issues, supporters/opposition of past policies, and interest in a proposed policy
    • During intervention: To monitor progress toward the policy goal; to update records after an election or new staffing
    • Post-intervention: To document achievement of policy goals (record of discussions, votes, important dates, enforcement mechanisms, etc.)
  • Public Opinion Survey
    • Information source: People who would be affected by a policy: tenants, people at outdoor events, people waiting for a bus, people in the specific jurisdiction
    • Pre-intervention: To learn the extent of public knowledge about issues and/or support for (or opposition to) a proposed policy
    • During intervention: To gauge the midstream status of knowledge and/or support, and to advise projects on how to adjust midstream
    • Post-intervention: To assess post-intervention status of knowledge, experiences, and/or support
  • Tobacco Purchase Survey
    • Information source: Retail stores and other tobacco outlets
    • Pre-intervention: To provide information to decision makers about the extent of the problem of sales to minors
    • During intervention: To keep data fresh; to add another wave of data collection to the overall analysis
    • Post-intervention: Not common, except for compliance objectives (in which case it would be an outcome measure)
  • Other
    • Information source: Examples: Photovoice, website/Google Analytics, Facebook analytics, etc.
    • Pre-intervention: To determine the status/scope of whatever is being examined
    • During intervention: To monitor status
    • Post-intervention: To document adoption or “other without measurable change” objectives (e.g., has use of the website increased?)

People sometimes confuse process and outcome activities with the kind of data they produce, as in qualitative or quantitative. However, these terms are not synonymous. While Outcome Evaluation data is typically quantitative (countable), process evaluation activities can produce either qualitative (descriptive) or quantitative data, and sometimes both.

OUTCOME EVALUATION 

While all objectives benefit from process evaluation activities, the same is not true for Outcome Evaluation activities. Whether Outcome Evaluation activities are used depends on the evaluation plan type, and such activities typically need to measure a change over time.

In California Tobacco Control, the term Outcome Evaluation (which is distinguished here with a capital “O”) refers to evaluation activities that include acceptable measures to gauge a project’s efforts towards meeting its objectives. Examples of appropriate outcome measures include decreased tobacco litter, fewer instances of smoking, and reductions in illegal tobacco sales. Beyond just the passage of a policy or the addition of coalition members, the Outcome measure must be something observable that demonstrates the implementation of the objective. It is crucial that the Outcome is consistent with the goal specified in the objective. This is why a strong and well-crafted SMART objective is a prerequisite to writing an effective evaluation plan.

Outcome measures often (but not always) compare conditions at several points in time to demonstrate change from before to after program efforts, often looking at quantitative data such as the number of tobacco ads, the number of tobacco retail licenses obtained, or the number of smokefree housing units. Because funding limitations do not typically allow for data collection in comparison communities, Outcome Evaluation does not typically seek to prove that the change occurred solely because of the intervention, just that there was a change. The number of waves required for outcome measurements depends on the evaluation design chosen.

WAVES OF DATA COLLECTION

Waves refer to two or more rounds of data collection conducted in the same manner among the same population of interest at different specified points in time. Collecting multiple waves of data allows the project to measure progression and change over time. The most common example of this is the pre- and post-design where data are collected both before and after an intervention has been implemented such as a tobacco purchase survey conducted before and after a tobacco retail license policy is implemented. Some data, used for continual feedback and improvement, are collected in recurring waves, such as with coalition surveys which are administered annually. In California Tobacco Control, activities that are performed on an ongoing basis, such as media activity record or website analytics, are categorized as one continuous wave over the whole funding period.

There are some exceptions, but for the most part, methods for collecting data should remain the same in each wave. It may not be feasible to have exactly the same people involved in each wave; for example, public opinion surveys will rarely capture the same individuals in every wave, and city council membership changes after elections. What is more important is that the sample is drawn from a similar group of subjects and that the circumstances under which the activity is conducted are the same for each wave. This means using similar data collection locations, days of the week, times of day, weather conditions, presence or absence of an event, etc. Keeping the same circumstances gives a better chance of getting the same type of subjects in the sample across all waves.
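
For projects that want a quick quantitative check on whether the change between two waves is larger than chance variation, below is a minimal sketch comparing hypothetical baseline and follow-up results from a tobacco purchase survey with a simple two-proportion z-test. The counts, sample sizes, and function names are illustrative assumptions only, not a required analysis method.

```python
# Minimal sketch: comparing two waves of a tobacco purchase survey
# (illegal sales rate before and after TRL policy implementation).
# All counts below are hypothetical placeholders.
from math import sqrt, erfc

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))

# Wave 1 (baseline): 15 illegal sales out of 60 purchase attempts
# Wave 2 (follow-up): 5 illegal sales out of 60 purchase attempts
pre_rate, post_rate = 15 / 60, 5 / 60
z, p = two_proportion_z(15, 60, 5, 60)
print(f"Illegal sales rate: {pre_rate:.0%} -> {post_rate:.0%} (z = {z:.2f}, p = {p:.3f})")
```

A small p-value here would suggest the drop between waves is unlikely to be chance alone, although, as noted above, a simple pre/post comparison cannot by itself attribute the change to the intervention.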

IMPACT EVALUATION

There is one level of evaluation that goes beyond looking at immediate Outcomes. Impact evaluation goes a step further in rigor by seeking to measure effects attributable to project efforts – a direct cause and effect. However, to date this evaluation has not been regularly conducted at the local level. 

To establish impact, projects can collect primary data about community-level changes in prevalence, disease risk status, morbidity, and mortality, particularly the long-term effects of tobacco control work on potential disparate impacts on priority populations. Alternately, data obtained through statewide surveillance systems may be used. Before including activities in an evaluation plan that make pre/post comparisons, ensure relevant data will be available for target locations in the desired timeframes. Using these secondary data sources is an efficient use of existing resources and generally reflects a very small deliverable percentage.

Most of the statewide surveillance data sources such as those listed below provide important tobacco-related data for priority populations. These sources will likely be important for endgame efforts and measuring overall tobacco use rates and trends. CTCP created the Data User Query System (DUQS), an online tool that helps CTCP-funded projects search multiple data sources available at https://www.tcspartners.org/ESS/TableauDashboard.cfm.

  • Youth
    • California Student Tobacco Survey (CSTS)
    • California Youth Tobacco Survey (CYTS)
    • Teens Nicotine and Tobacco Study (TNT)
    • California Healthy Kids Survey (CHKS)
    • Smokers Helpline
  • Adult
    • Behavioral Risk Factor Surveillance System (BRFSS)
    • Online California Adult Tobacco Survey (CATS)
    • California Health Interview Survey (CHIS)
    • California Smokers Helpline
  • Retail
    • California Tobacco Purchase Survey
    • California Tobacco Retailer Poll
    • Retail Scanner Data Analysis
    • Tobacco Retailer Mapping (CTHAT)
    • Healthy Stores for a Healthy Community (HSHC)
      • Note: HSHC ended in 2019. Data can be used as a baseline measure.

COMMON OUTCOME EVALUATION MEASURES

Certain data are more acceptable than others as Outcome Evaluation measures for objectives. Listed below, by policy topic, are commonly used outcome measures for objectives that require Outcome Evaluation, along with less robust and unacceptable alternatives:

  • Flavored tobacco
    • Acceptable: Store observations, HSHC data
    • Less robust: Calling stores, self-reported retailer survey
    • Unacceptable: Policy record review counting policies
  • Minimum price, pack volume
    • Acceptable: Observation of prices, pack sizes, and discounting offers; HSHC data
    • Less robust: Calling stores, self-reported retailer survey
    • Unacceptable: Policy record review counting policies
  • Smokefree outdoor dining, bars, service areas, recreational and non-recreational public areas
    • Acceptable: Observation of signage, tobacco litter count, number of people smoking/vaping
    • Less robust: CATS secondhand smoke awareness questions
    • Unacceptable: Policy record review counting policies; key informant interviews with managers/staff of restaurants, parks, or the chamber of commerce
  • Smokefree multi-unit housing
    • Acceptable: Observation of signage, tobacco litter, number of people smoking/vaping
    • Less robust: Number of complaints (to managers or to the health department) about secondhand smoke or litter; policy record/document review measuring signed leases or addendums with smokefree language
  • Tobacco Retail License
    • Acceptable: Young Adult Tobacco Purchase Survey, Young Adult Electronic Purchase Survey, store observation, HSHC data
    • Less robust: CSTS self-reported purchasing of cigarettes from a store, number of licenses purchased, number of citations issued for violations
    • Unacceptable: Policy record review counting policies
  • Tobacco retailer density, zoning
    • Acceptable: CTHAT, density mapping by the project
    • Unacceptable: Policy record review counting policies
  • Tobacco-free pharmacies and health care campuses
    • Acceptable: Observation, HSHC data
    • Less robust: Interviews or surveys with pharmacies or health care providers
    • Unacceptable: Policy record review counting policies
  • Behavioral health cessation treatment program
    • Acceptable: California Smokers’ Helpline call reports; self-reported surveys of post-treatment quit rates (completion, 3 months, 6 months, 1 year)
    • Unacceptable: Number of participants who received cessation services

EVALUATION DESIGN

Up to this point in the evaluation planning process, three components have been discussed: the objective wording, the evaluation plan type, and process and outcome activities. The fourth type of decision related to evaluation planning, evaluation design, is discussed in this section. A description of the types of evaluation design is provided below, along with the design type most common in California Tobacco Control programs.


There are three options for evaluation design in tobacco control: 1) non-experimental; 2) quasi-experimental; and 3) experimental. The evaluation design for most CTCP-funded objectives is classified as non-experimental design. 

For objectives that require Outcome Evaluation, there are two feasible evaluation design choices: non-experimental or quasi-experimental. Each of these is described in more detail below.

Non-experimental design, the most common design, involves comparing data from the community of interest (also known as the intervention group) before and after a program’s efforts. This is the least rigorous study design because it can only provide a weak indication of a possible connection between program efforts and the intended outcome. However, it is useful in many situations when a stronger design is not appropriate or when resources are limited.

Quasi-experimental design is more rigorous because it involves either comparisons with other groups or multiple measurements over time. With comparison groups, the intervention group and a comparison group have similar baseline characteristics, so any difference between the two groups measured after the intervention can be better attributed to program efforts. Assignment to groups is non-random, organized by demographic characteristics, convenience, or availability. Multiple measurements over time (at least three points in time, e.g., before, during, and after, or one before and two after policy implementation) allow projects to treat one group as both the treatment and comparison group. If there are only two points in time, the design must be classified as non-experimental.
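
To make the comparison-group logic concrete, below is a minimal difference-in-differences sketch using hypothetical averages from an intervention city and a comparison city. The jurisdictions and numbers are invented for illustration; this is one common way, not the only way, to summarize a quasi-experimental pre/post design with a comparison group.

```python
# Minimal sketch: difference-in-differences with a comparison jurisdiction.
# Values are hypothetical averages of people observed smoking per park visit.

intervention = {"pre": 6.0, "post": 2.5}  # city that adopted the smokefree policy
comparison = {"pre": 5.5, "post": 5.0}    # similar city without the policy

change_intervention = intervention["post"] - intervention["pre"]  # -3.5
change_comparison = comparison["post"] - comparison["pre"]        # -0.5

# Netting out the change seen in the comparison city helps rule out
# trends that would have happened anyway.
did_estimate = change_intervention - change_comparison            # -3.0
print(f"Intervention city change: {change_intervention:+.1f}")
print(f"Comparison city change: {change_comparison:+.1f}")
print(f"Difference-in-differences estimate: {did_estimate:+.1f} people per visit")
```

Because the comparison city also changed slightly, the difference-in-differences estimate is smaller than the raw pre/post change in the intervention city, which is exactly the added rigor this design provides.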

In OTIS, projects may also see the option of experimental design, but it is not used in local programs. This is the most convincing and rigorous study design for demonstrating that program efforts caused the intended outcome. A randomized controlled trial is a type of experimental design that requires at least one intervention group, one control group, AND randomized assignment of the groups in the study. However, this method is resource intensive and often unethical in local California Tobacco Control contexts, as we would not intentionally withhold lifesaving interventions from a group simply for research purposes. For this reason, local projects should employ either a non-experimental or quasi-experimental design.

Note: The design type should be stated in the evaluation summary narrative of the workplan for each objective.

INTRODUCTION TO DATA COLLECTION ACTIVITIES

Below is a list of evaluation activities that are commonly used in California Tobacco Control programs. Projects may choose from among this list and/or build evaluation activities from scratch. A definition, common uses, and best practices for each of these activities are provided via the links below. Information on how to build an activity is provided in the next section, as is a description of writing templates called “wizards” provided by CTCP to facilitate the writing process. A model that can be used to create an evaluation activity description with all of its component parts is also included. Information regarding conducting these evaluation activities is provided in the section Conducting Evaluation, as are links to how-to resources.

  1. Education/Participant Survey [E/PS]
  2. Evaluation Report
  3. Focus Group [FG]
  4. Key Informant Interview [KII]
  5. Media Activity Record [MAR]
  6. Observation
  7. Policy Record Review [PRR]
  8. Public Opinion Poll/Public Intercept Survey [POP/PIS]
  9. Tobacco Purchase Survey [TPS, YATPS, YAEPS]
  10. Other - Asset Mapping
  11. Other - Consumer/Materials Testing
  12. Other - Membership/Participation – coalition intake, attendance/participation, diversity matrix  
  13. Other - Photovoice
  14. Other - Web/Google Analytics
  15. Other - Not Listed

CHOOSING EVALUATION ACTIVITIES

The previous sections on evaluation planning have explained the role evaluation should play in California Tobacco Control to support and inform the next steps of the workplan. But how do all of the pieces fit together and make sense? This next section goes through a set of questions that will help projects figure out what kind of evaluation (and corresponding intervention) activities the plan might need to outline in the scope of work.

HELPFUL PROMPTS FOR DEVELOPING PLAN COMPONENTS 

  1. Starting with the objective and plan type, what steps need to happen to get decision makers or the population of interest to take the desired action?
  • What is the issue? Document the scope of the problem
    • What information already exists? 
    • Is it current, accurate, local, and specific to the population of interest?
    • What information still needs to be collected/examined? These will be the components of the project’s evaluation activities.
  • Who needs to hear/see the issue? (Educate key players/actors)
    • Learn how they frame/prioritize the issue and discover what type/amount/source of evidence will be convincing. Find out concerns of those with the power to create the change desired (e.g., elected officials or their staff; decision makers for voluntary policies) through key informant interviews with the decision makers. 
    • Identify and/or assess constituents, allies, opponents/critics, community thought leaders, decision makers, and gatekeepers. This happens during community-based strategic planning sessions, utilizing the Midwest Academy Strategy Chart as a guide to build the plan.
    • Explore the best way to frame the issue. Projects may need focus groups, public opinion polling, public intercept surveys, or key informant interviews with community leaders to explore and identify the best options to move forward.
  • How can projects build/mobilize a groundswell of support?
    • Engage the community and affected stakeholders. This will be done through a variety of intervention activities such as educational presentations, trainings, paid and earned media, collaborations with other organizations and interest groups, and community-led advocacy activities.
    • Utilize and support project assets (coalition, advisory board, campaign committees, volunteers, alliances) to carry out activities. Incorporate collaboration activities as needed into the plan. Ensure appropriate time is planned for relationship building, training, and collaboration. Include partners as responsible parties for scope of work activities.
  • How can projects make their case/exert leverage?
    • What forms of outreach/communication/public acts will spread the message? Based on the evaluation findings, engage assets to conduct information sharing, public education (often through presentations and town halls), tailored decision maker education (e.g., one-on-one meetings, presentations), paid and earned media campaign tactics, social media outreach, engagement tactics (e.g., petitioning, sending letters of support, or making phone calls to decision makers), and providing decision makers and their staff with necessary technical assistance support.
  2. Confirm and document the actions taken
    • Build in evaluation activities such as observations, public opinion surveys, policy record review, and/or interviews that capture what happened and with what effect
  3. Share the path/process with contributors, stakeholders, and the broader tobacco control field
    • Each objective requires a written evaluation report at the end of the funding period that summarizes the project’s efforts. These reports are commonly referred to as Final Evaluation Reports or Brief Evaluation Reports.
    • After a policy objective has been met or failed in a jurisdiction, such as at a mid-point of the funding period, a review process and report can help summarize lessons learned to be applied to future work in other jurisdictions.
    • Strongly consider building in modes for sharing (and celebrating/making sense of) evaluation results in real time throughout the workplan with all of these actors. 
  4. Document decisions made along the way. This will make it easier to write the evaluation narrative summary once the plan is fully fleshed out.

TYPES OF DATA 

According to the Centers for Disease Control and Prevention, “Evaluation findings should be used both to make decisions about program implementation and to improve program effectiveness.” To those ends, evaluation planning includes choosing the types of data to be collected. 

QUANTITATIVE VS. QUALITATIVE DATA 

Quantitative data is used to answer questions such as “how many” and “how often,” e.g., how many multi-unit housing complexes have a voluntary, smokefree policy. Quantitative data is usually collected through observation or survey. Data is analyzed by using descriptive statistics (frequencies and percentages) as well as inferential statistics to test whether the results are generalizable to the broader population. 
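
As a concrete illustration of these two kinds of analysis, the brief sketch below (written in Python, with hypothetical survey responses and counts) tallies frequencies and percentages and then runs a chi-square test of independence from the scipy library as an example of an inferential statistic; all names and numbers are placeholders, not CTCP data.

    from collections import Counter
    from scipy.stats import chi2_contingency  # inferential test of independence

    # Hypothetical yes/no responses to "Do you support smokefree multi-unit housing?"
    responses = ["yes"] * 120 + ["no"] * 30

    # Descriptive statistics: frequencies and percentages
    counts = Counter(responses)
    total = sum(counts.values())
    for answer, n in counts.items():
        print(f"{answer}: {n} ({n / total:.0%})")

    # Inferential statistics: is support independent of resident type?
    # Rows = renters vs. owners; columns = yes vs. no (hypothetical counts).
    table = [[80, 10],
             [40, 20]]
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

A small p-value (conventionally below 0.05) would suggest that the difference between the two groups is unlikely to be due to chance alone.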

Qualitative data is used to gather in-depth and detailed perceptions, experiences, or opinions. Unlike quantitative data, qualitative data are not easily summarized into numbers. However, qualitative data can be used to add depth, detail, and meaning which cannot be obtained from quantitative data alone. An example of a qualitative research method is interviews, which generate qualitative data through the use of open-ended questions. In addition to words or text, qualitative data can also consist of photographs, videos, sound recordings, etc. Various techniques can be used to make sense of the data, such as content analysis or thematic analysis.

For more information on each of these data types, as well as resources on cleaning and analyzing quantitative and qualitative data, see: 

MIXED METHODS 

Because of their different natures (numeric vs. descriptive), qualitative and quantitative data are often interpreted and reported separately. However, there is value in looking at them together. The term “mixed methods” refers to an approach of using both quantitative and qualitative methods to answer evaluation questions, recognizing the need for both breadth AND depth, quantity, range, context, and motivations. 

The CDC notes, “Sometimes a single method is not sufficient to accurately measure an activity or outcome because the thing being measured is complex and/or the data method/source does not yield data reliable or accurate enough.” Employing more than one data collection method and/or obtaining information from more than one source can “reduce the chances of measurement and sampling bias as well as allow for the flexibility to tailor the mode to the differing needs of various segments of the population.” 

Rather than reporting survey results separately from key informant interview themes, a mixed methods analysis would look at whether the two sources of information confirmed each other, where they diverged, and where the qualitative findings could shed meaning on the quantitative results. For example, if support for smokefree multi-unit housing was fairly high among all priority populations except for one group, interview data could shed some light upon why that is.

BUILDING AN EVALUATION ACTIVITY 

When developing the workplan and choosing activities, it is helpful to go about it strategically. Start with the end goal of the objective in mind and then work backwards, building in and connecting the activities necessary to achieve the objective. 

One helpful tool for thinking through this is the End-Use Strategizing Worksheet. There is also a webinar that walks participants through an example of how to use this worksheet. The process should be employed by a team of collaborators, evaluators, project managers, staff, and community coalition members charged with developing workplans, activity parameters, data collection instruments, and intervention activities. 

A list of commonly used evaluation activities is available in the previous section of this guide. To facilitate the plan-writing process, CTCP provides sample activities known as wizards (writing templates) with typical wording for various activities. The activity wizards must be adapted to specific project goals, needs, and capacity. Activity parameters need to be tailored so they are appropriate for the intended population. Though the details and order may differ, each activity should cover:

  • Intended outcomes = purpose, topic/focus, audience/end-user
  • Parameters = sampling, population of interest, recruitment/training, mode, instrument, waves, logistics (timing, location, personnel)
  • Analysis and action plan = links to intervention activities or other strategies, explains how results will be used
  • Reciprocity = a dissemination plan to share results back with the population from whom data are collected

The following structure can be used for writing a new evaluation activity description:

To [PURPOSE: inform, improve, measure, confirm, monitor] [END-USER: policy makers, staff, coalition, store/MUH owners, general public, etc.] about [TOPIC/FOCUS: what you want to know, document, or accomplish], a [METHOD: survey, observation, interview, focus group, media record, policy record/document review, Photovoice, litter audit, describe other evaluation activity] will be conducted with [SAMPLE SIZE #-#] of [POPULATION OF INTEREST (unit of analysis, identifying characteristics, inclusion/exclusion criteria)]. A [SAMPLING METHOD: purposive, random, stratified, cluster, convenience, census] sample will be selected/recruited from [constituency (board of supervisors, city council, community leaders, store owners, chamber of commerce, church leaders, influencers, thought leaders, school leadership, general public, location type, etc.)] at/in/from [LOCATION as applicable]. A [MODE: paper and pen, online, mobile, verbal, visual, interview guide] INSTRUMENT and protocol will be developed or adapted [in consultation with TCEC or from another source], pilot tested, revised, and tailored for use with the intended population and cultures. This [pre, post, or pre and post] measurement will be conducted in [#] WAVES before and after [intervention, e.g., policy implementation, educational campaign, community outreach, etc.] for [duration: # of minutes, rounds, groups]. [PERSONNEL: data collector pool e.g., staff, coalition, youth, volunteers] will be recruited, trained, and assessed for readiness [e.g., practice, knowledge tests, monitoring] to administer instrument protocols. Data will be ANALYZED using [descriptive statistics, inferential statistics, content analysis] and results will be disseminated with [STAKEHOLDERS: specific staff, staff in general, coalition, policy makers, general public, etc.] in appropriate [FORMATS: presentation, fact sheet, summary report, media release, podcast, etc.]. Lessons learned will be used to [ACTION PLAN: measure change over time, support or inform next steps, link to interventions as appropriate, etc.].

Example: To improve staff understanding of community knowledge, attitudes, and perceptions regarding smokefree parks, a public intercept survey will be conducted with a convenience sample of 150-300 visitors of parks in Savannah using a mobile device. The survey and protocol from the previous workplan will be used. This pre/post measurement will be conducted in two waves before and one wave after smokefree parks policy implementation. Surveys will be conducted by project staff who will be trained and assessed for readiness during trainings to administer instrument protocols. Data will be analyzed using descriptive statistics and inferential statistics to document support/opposition to potential policy strategies, knowledge, awareness, beliefs, and demographic information provided by survey participants. Results will be shared with program staff and other stakeholders such as the coalition and potential partners to inform next steps and improve interventions.

MORE CONSIDERATIONS FOR CHOOSING AND BUILDING EVALUATION ACTIVITIES

After being submitted, every proposed workplan is reviewed and scored by a team of program, budget, and evaluation specialists. They are looking to see that each plan consists of a rational series of activities that work toward achieving the objective. The logic of the number, sample size, and use of evaluation activities is a principal indication of the overall quality of the plan. So, it is important to get this right!

An effective workplan relies upon not only the appropriate mix of evaluation activities, but also the proper number, scope, timing, and size of activities. Every evaluation decision should be driven by utility. Reviewers are looking for the answer to these questions: What purpose will it serve? How will the data collected inform or support interconnected activities and progress toward the objective overall? 

It is important to avoid the mistake of thinking that cramming in more evaluation activities makes the plan more rigorous or fund worthy. Each activity needs to add value and have a purpose. Otherwise, it is just busy work.

So how can a plan writer tell if an activity is necessary? Employ end-use strategizing thinking: What information or evidence would be helpful to know and have to achieve the objective? Typically, it is crucial to conduct formative research first through policy record review, media activity record review, key informant interviews with policy makers, and observation. Then, collect and measure evidence of the issue and of support to show decision makers, increase public knowledge and awareness of the issue and potential solutions, measure readiness to take action, and time intervention and evaluation activities accordingly.

Also consider whether it would be advantageous to be able to collect similar data points in waves over time or from multiple sources/types (e.g., qualitative AND quantitative) in order to triangulate or confirm the validity of findings.

Sample Sizes 

When determining an effective sample size for an evaluation activity, think about what threshold will be compelling, convincing, and representative enough to inform and/or persuade critical audiences. For example, will survey results collected from just 100 city residents persuade a city council member that her constituents support a particular policy direction? In a small community, they might; in a large community, probably not.

A sample size calculator will show the recommended sample size for a given confidence level and margin of error. [See https://www.surveymonkey.com/mp/sample-size-calculator/]. With a smaller or more contained population, aim to get 100% (called a census of the population, rather than a sample or portion of it). For example:

  • Coalition Satisfaction Survey must include a census of all members
  • Education/Participant Survey for data collectors must include a census of all trainees
  • Key Informant Interviews should include a minimum of 5 or a census 
  • Young Adult Tobacco Purchase Survey should include a minimum of 25 stores or a census of all tobacco retailers
  • Focus Groups should include a minimum of 2 groups with 6-10 adults or 5-7 youth
  • Public Opinion Survey will likely have 100-385 respondents, depending on the type of analysis planned.  For example, if comparisons by jurisdictions, zip codes, or other demographics are planned, a project may want 100-385 responses from each group.

Also remember to specify the breakdown by jurisdiction in the activity description. For example, state, “The sample size will be 25-35 in the 2 jurisdictions and 2 comparison communities for a total of 100-140.” 

To make data compelling for a policy maker, it may be important to survey residents of that policy maker’s district. Instead of a sample size for a city of 100,000, it might be a sample size for a district of 20,000. Getting more specific about sample sizes and audiences will get better quality data and easier-to-manage data collection efforts. 
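
For projects that want to check the math behind a calculator, the sketch below shows the standard sample-size calculation (Cochran's formula with a finite population correction), which is essentially what online calculators implement; the population figures are hypothetical.

    import math

    def sample_size(population, z=1.96, margin_of_error=0.05, proportion=0.5):
        # Cochran's formula with a finite population correction.
        # z = 1.96 corresponds to 95% confidence; proportion = 0.5 is the most conservative choice.
        n0 = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
        n = n0 / (1 + (n0 - 1) / population)  # adjust for smaller populations
        return math.ceil(n)

    print(sample_size(100_000))  # city of 100,000 -> about 383
    print(sample_size(20_000))   # district of 20,000 -> about 377
    print(sample_size(500))      # small community of 500 -> about 218

Note how the recommended sample size changes only modestly for large populations but drops considerably for small ones, which is why a census is often feasible for contained groups.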

Quantitative data collection activities such as surveys or observations generally aim for some level of representativeness, meaning that the sample reflects the makeup and characteristics of the whole population of people or things being measured.

Qualitative data collection activities do not typically aim for representativeness or extrapolation of results to the larger population. The samples are often selected purposively rather than randomly and can therefore be fairly small.

Documenting the rationale for sampling choices and incorporating these details into the activity description can help staff teams recall the sampling decisions when it comes time to actually carry out the evaluation activity. For example, “In order to ensure that the voices of the primarily Hispanic/Latino community are included in the evaluation, 4-5 focus group discussions will be conducted with approximately 6-10 people in each group. Two of the groups will be composed of (female) Latina tenants, two groups of (male) Latino tenants, and one group will contain a mix of Latinx tenants.”

There are a variety of different sampling strategies to choose from: simple random, stratified random, cluster, purposive, convenience. Be sure that the strategy is a good fit for the evaluation activity and the sample composition. In addition, the plan should define the inclusion and exclusion criteria, which specify the characteristics of what should or should not be included in the sample.
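
As a minimal sketch of how one of these strategies might be put into practice, the example below draws a stratified random sample from a hypothetical roster of multi-unit housing complexes grouped by region; the roster, strata, and per-stratum sample size are illustrative only.

    import random
    from collections import defaultdict

    # Hypothetical roster of 100 multi-unit housing complexes, each tagged with a region
    complexes = [{"name": f"Complex {i}", "region": region}
                 for i, region in enumerate(["North", "South", "East", "West"] * 25)]

    def stratified_sample(units, key, per_stratum, seed=42):
        # Draw an equal-sized simple random sample from each stratum.
        random.seed(seed)
        strata = defaultdict(list)
        for unit in units:
            strata[unit[key]].append(unit)
        sample = []
        for group in strata.values():
            sample.extend(random.sample(group, min(per_stratum, len(group))))
        return sample

    selected = stratified_sample(complexes, key="region", per_stratum=5)
    print(len(selected))  # 20 complexes: 5 drawn from each of the 4 regions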

For more details on sample sizes, sampling method, and other sampling decisions, see https://tobaccoeval.ucdavis.edu/parameters-and-sampling 

Budget Allocations for Evaluation 

The number and type of evaluation activities and their sample sizes will be dependent upon how much staff, resources, and time the project will have available to conduct the activities. Think about the value and cost that different types of activities will require. Key informant interviews can be conducted by one or two people, while store or park observations may require teams of data collectors and data collector trainings. 

Some activities benefit from the more costly but well-honed skills of an evaluator or facilitator; others can be carried out by trained volunteers and a team leader who monitors quality control throughout the data collection process.

Percent Deliverables for Evaluation 

When assigning the percent deliverable for an activity, estimate the amount of staff or contractor time it will take to plan, conduct, analyze, and write up results, given the data collection method, location, and sample size. 

Do a reality check by converting that deliverable into a dollar figure (x percent of the funding amount). Does the amount seem reasonable for the work output? For example, a cost of $40,000 to conduct 4-5 key informant interviews will be sure to raise a red flag for plan reviewers. Experienced evaluators are likely to have more realistic projections and will know how long certain tasks will take to complete. Also, the Policy Section of the CTCP Administrative and Policy Manual has a section on “Average Hours for Completion of Evaluation Activities” that can help estimate the time and resources required for evaluation activities.
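
The reality check itself is simple arithmetic; a quick sketch with hypothetical figures:

    funding_amount = 500_000        # hypothetical annual contract amount
    percent_deliverable = 0.08      # 8% assigned to one evaluation activity
    dollar_value = funding_amount * percent_deliverable
    print(f"${dollar_value:,.0f}")  # $40,000 -- likely too high for 4-5 key informant interviews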

SETTING UP A TIMELINE FOR EVALUATION ACTIVITIES

Once the evaluation plan type, evaluation design, and process and Outcome Evaluation activities have been determined, a timeline for the evaluation activities must be created. This section describes why a timeline is important and introduces the Gantt chart planning tool. This type of chart can be used as a program planning, monitoring, and communication tool to ensure activities are completed and evaluation activities are used to inform the project. This section focuses on its use during the application phase, when the scope of work is being developed (pre-funding approval).

WHY IS A TIMELINE IMPORTANT? 

The purpose of pre-intervention evaluation activities is to inform the project’s intervention strategies. This is especially important regarding objectives that are focused on policy change.

The process of determining the timeline for evaluation activities involves sequencing key intervention activities with the evaluation activities and showing their dependencies so that the information gathered through these evaluation activities can inform the project’s next steps. 

WHAT IS A GANTT CHART?

A Gantt chart is a type of bar chart that illustrates a project schedule by listing the tasks to be performed on the vertical axis and the time intervals on the horizontal axis. It is a high-level chart that does not show all of the implementation-related details or logistics, just enough detail to allow coordination of activities.
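
For teams that would rather generate the chart programmatically than maintain it in a Word document, the minimal Python/matplotlib sketch below produces a simple Gantt chart; the activities and dates are hypothetical placeholders.

    from datetime import date
    import matplotlib.dates as mdates
    import matplotlib.pyplot as plt

    # Hypothetical key activities for one funding year: (name, start, end)
    activities = [
        ("Instrument development",         date(2024, 7, 1),   date(2024, 8, 15)),
        ("Data collector training",        date(2024, 8, 15),  date(2024, 9, 1)),
        ("Public opinion survey (wave 1)", date(2024, 9, 1),   date(2024, 10, 15)),
        ("Decision maker education",       date(2024, 10, 15), date(2025, 2, 1)),
        ("Policy record review updates",   date(2024, 7, 1),   date(2025, 6, 30)),
    ]

    fig, ax = plt.subplots(figsize=(8, 3))
    for row, (name, start, end) in enumerate(activities):
        ax.barh(row, (end - start).days, left=mdates.date2num(start), height=0.5)
    ax.set_yticks(range(len(activities)))
    ax.set_yticklabels([name for name, _, _ in activities])
    ax.invert_yaxis()   # first activity on top
    ax.xaxis_date()     # format the x-axis as calendar dates
    ax.set_xlabel("Timeline")
    fig.tight_layout()
    fig.savefig("gantt_chart.png")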

WHO SHOULD TAKE PART IN THE PROCESS? 

During the initial development of the application, setting up the timeline for activities should be done with the person(s) who are developing the intervention and evaluation activities, typically the project director and evaluator. Once the project is funded, all project staff should be included in the process of revisiting and monitoring the activity timeline and sequencing of activities on at least an annual basis.

HOW TO DEVELOP THE TIMELINE 

The process of sequencing intervention and evaluation activities can be done in-person using wall charts and sticky notes or with online platforms and techniques, such as Google docs, Zoom whiteboards, or other tools. One of the easiest methods to use is a Word document, which can be shared via any online meeting platform through screen sharing.

Not every intervention activity, such as logging into the CTCP Partners website on a weekly basis, needs to be included in the Gantt chart. Only include key intervention activities such as educational activities, media, and meetings with policy makers. On the other hand, all evaluation activities should be included on the chart, and sometimes intermediate steps for those evaluation activities, such as instrument development, pilot testing, data collector training, completion of data collection, participatory data analysis, data equity, and dissemination, can be included as well.

The timeline will need to consider internal factors, such as the amount of time staff can devote to data collection activities and the amount of time it will take to reach a sample size, as well as external factors, like the election cycles for the policymakers the project is working to educate. Election cycles in particular can be important to consider when building a realistic timeline.

Preparation for the sequencing will depend upon how it will be done, meaning in person or online. For example, if the process will be conducted online, each of the evaluation activities and key intervention activities can be listed on the vertical axis in advance. Developing the timing and sequencing can be done through dialogue, examining each activity, and setting a date by which that milestone will be achieved.

Once the activities have initially been sequenced, make sure the timing of intervention and evaluation activities works so that needed information from evaluation activities will be available when needed. Also think about potential bottlenecks, availability of staff or volunteers, and community events or other environmental issues that could affect the completion of tasks. 

If the project’s plan type is “adopt and implement,” policy adoption should be timed to occur before the last year of the plan so that there is a year to conduct implementation activities. Similarly, if the objective includes multiple targets for policy adoption and implementation, these processes should be sequenced accordingly. 

The results of the planning process can be used to determine the milestone dates related to each evaluation activity, which can be populated into the Online Tobacco Information System (OTIS) and facilitate ongoing planning, monitoring and communication. However, if the objective contains a goal of adopting and/or implementing more than one policy, OTIS timelines may span the length of the contract, meaning building the annual timeline is an essential step to ensuring a project budgets its time for each policy campaign appropriately.

Note that this type of planning tool can also be used throughout the project’s workplan for ongoing planning and monitoring. Further information is provided under Timeline of Activities. Additional resources can be found at:

https://tobaccoeval.ucdavis.edu/news/learning-how-make-most-your-evaluation-team

https://tobaccoeval.ucdavis.edu/evaluation-plan 

OTHER ASPECTS OF THE EVALUATION PLAN

Beyond the objective and the intervention and evaluation activities, there are other required aspects of each evaluation plan. See the procurement for details. Each objective requires the following:

  • Evaluation narrative summary describes the plan in a coherent order, connecting how evaluation activities will be used to improve interventions and strategies. Describe what is expected to change and how the change will be measured. State the plan type, the evaluation design, whether Outcome Evaluation measures will be collected, and how findings will be shared and disseminated with others. From this section, readers should be able to understand the logic and rationale for how evaluation efforts will advance the objective.
  • Submitting the workplan in OTIS is required for most proposals. Additional communication and revisions are also completed in OTIS.
  • For most funding opportunities, a qualified evaluator must certify that they spent at least four hours developing and/or consulting on the plan. The quality of descriptions and combination of activities should make it obvious that a qualified evaluator helped write or reviewed the plan.
  • To confirm evaluator involvement in the plan development, there is a multi-step process where the project must select the evaluator in an OTIS drop-down menu of the application. An email is then sent to the local program evaluator, who must complete separate steps to log in to OTIS and potentially the local program evaluator directory. There are a series of questions for the evaluator to answer and then certify that the information is correct. Be sure to allow plenty of time for this process to take place; do not wait until the last minute to do this step. The application cannot be submitted until this step is completed.

Because the evaluator who develops or reviews the plan is not always the evaluator who ends up carrying out the evaluation plan, it is important that the rationale for plan components be clear and documented in some way. Explain the thinking behind choices made and the internal or external conditions that may have affected the decisions about data collection activity choices, sampling strategies or sizes, and intended uses of the activities.

CHECKLIST FOR EVALUATION PLANNING

The evaluation planning process consists of many steps. Choosing an evaluation plan type, an evaluation design, and process and outcome activities are covered in other sections of this guide. This section highlights key planning “tips” to ensure that the evaluation plan meets CTCP requirements. It also provides an overview of what the evaluation plan should include.

These tips should be reviewed before the plan is created and again afterward to ensure that all necessary components are included. 

Logical and Easy to Follow:

  • The evaluation plan will adequately measure intended outcomes.
  • There is enough detail so that project staff know how to implement the plan and why. The plan makes sense to the reader.
  • Plan types, process and Outcome measures, and combinations of activities are a good fit for the objectives and intervention plan, demonstrating that a qualified evaluator spent the required number of hours reviewing the scope of work (please see the request for application/guidelines for the procurement for the minimum number of review hours required). The combination of activities should make it obvious that a qualified evaluator helped write or at least reviewed the plan, providing at least 4 hours of input if required by the funding application guidelines.

Utility:

  • Each activity adequately describes the purpose, topic or focus of data collection, the instrument to be used, and how it will be administered (paper, online, mobile, etc.).
  • Each activity adequately describes how data will be analyzed and how results will be used to inform intervention activities and strategies.

Logistics:

  • Each activity includes an appropriate sample size, sampling method, description of who or what is being sampled and recruitment of participants.
  • Each activity states where, when, how, why and by whom data collection will occur, the number of waves, in what reporting period(s), and plans for training data collectors (if applicable). 

Cultural Competence/Tailoring to the Community:

  • The plan demonstrates involvement of a coalition or other community members, consultation with statewide technical assistance providers or coordinating centers, awareness of target audiences, and shows how the project intends to reach them to involve communities in the work.
  • The plan includes tailoring activities and findings for target audiences (e.g., translations, timing, location, mode, personnel, etc.)
  • The plan considers building in the means to triangulate data by collecting data in several different ways or from different sources. This can help minimize sampling or measurement bias AND allow for varying modes to be used with different segments of the population of interest.

Reporting and Deliverables:

  • Review specific funding application guidelines to ensure progress and evaluation reporting requirements are met. 
  • Reporting activities demonstrate reciprocity by sharing findings with those who helped collect and provide the data. Also states how findings will be shared with the community, stakeholders, decision makers, and the broader field of tobacco control.
  • Each activity lists appropriate deliverables including data collection instrument, summary report or analysis summary as appropriate.
  • The minimum or maximum percent deliverables specified in the funding opportunity are allocated to the evaluation plan. 
  • The evaluation plan has enough activities to support the formative, process, and outcome stages of the objective. 
  • Each activity lists appropriate responsible parties (External Evaluator, Internal Evaluation Project Manager, Health Educator, Project Director, coalition, etc.). Please note that the evaluator does not always have to be listed as the responsible party for all evaluation activities.

CONDUCTING EVALUATION

DATA COLLECTION ACTIVITIES DEFINED

This section of the guide focuses on information for conducting evaluation activities. A list of the most commonly used evaluation activities in California Tobacco Control is provided, as well as a description of the activity, common uses, and best practices. Links to supplemental resources are included with each activity and additional guidance is available at https://tobaccoeval.ucdavis.edu

All projects are encouraged to utilize these resources and incorporate best practices and advances in the evaluation field.  

COMMONLY USED EVALUATION ACTIVITIES  

  1. Education/Participant Survey [E/PS]
  2. Evaluation Report
  3. Focus Group [FG]
  4. Key Informant Interview [KII]
  5. Media Activity Record [MAR]
  6. Observation
  7. Policy Record Review [PRR]
  8. Public Opinion Survey [POS]
  9. Tobacco Purchase Survey [TPS, YATPS, YAEPS]
  10. Other - Asset Mapping
  11. Other - Consumer/Materials Testing
  12. Other - Membership/Participation – coalition intake, attendance/participation, diversity matrix 
  13. Other - Photovoice
  14. Other - Web/Google Analytics
  15. Other – Not Listed

EDUCATION/PARTICIPANT SURVEY (E/PS) 

About: Education/Participant Surveys are a quantitative data collection method consisting of a series of questions designed to measure participant experience (e.g., post-training knowledge, confidence, coalition member satisfaction, data collector readiness, post-presentation feedback, etc.). Other names for Education/Participant Surveys are post-training survey/assessment (PTS, PTA), coalition satisfaction survey (CSS), and data collection training (DCT) surveys. Typically, Education/Participant Surveys are a process evaluation activity.
Common Uses: Education/Participant Surveys help indicate what participants thought about the experience (e.g., a training, a specific event, a series of meetings, or coalition functioning) and determine what is working and what could be improved. This information is primarily used to improve the project’s efforts.
Best Practice

In general, use a consistent survey instrument that asks similar questions over time to help gather feedback on presentation styles and needs. Return participants also become familiar with the pattern of being asked about their experiences and how the presentation or training can be improved.

Education/Participant Surveys may not be required for every training or meeting and do not require a report every time. The most important use is to review feedback after the training and incorporate it immediately for the next training.

A retrospective pre-test has become a popular Education/Participant Survey method because it captures pre/post information at the same time. 

For data collector trainings, it is also important to provide opportunities for practice and observe a data collector’s readiness and comfort to follow the protocols. 

Resources

EVALUATION REPORT 

For most projects, a separate evaluation report is required for each objective. Please see the Reporting Section for more information.

FOCUS GROUPS (FG) 

 

About: Focus Groups are a qualitative data collection method designed to draw out participants’ beliefs, experiences, attitudes, and feelings on a topic through an organized or structured discussion. Typically, Focus Groups are a process evaluation activity.
Common Uses: Focus Groups are useful to understand the RANGE of choices or behaviors that are prevalent, WHY people feel a certain way, HOW things work, the NORMS of a particular population/community, or REACTIONS to strategies/campaign messaging or materials. Note that a description of consumer testing, specifically, is provided in a separate section below.
Best Practice: Focus Groups are conducted in a series, not as a single group, comparing across and within groups for themes. Typically, 2-6 groups of 6-10 adults or 5-7 youth each are conducted, depending upon the topic and the target audience. At least two groups must be convened.
Resources
  • https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/2018-05-17-TT4-Focus%20Group%20Interviews.pdf 
  • Krueger, Richard, and Mary Anne Casey. “Focus Group Interviewing,” in Handbook of Practical Program Evaluation, 3rd ed., Joseph Wholey, Harry Hatry and Kathryn Newcomer, eds., San Francisco: Wiley, 2010, pp. 378-403.
  • Krueger, Richard, and Mary Anne Casey. Focus Groups: A Practical Guide for Applied Research. 4th ed. Thousand Oaks: SAGE, 2009.
  • Tracy, Sarah. “The Focus Group Interview,” in Qualitative Research Methods: Collecting Evidence, Crafting Analysis, Communicating Impact, San Francisco: Wiley, 2013, pp. 167-173.

KEY INFORMANT INTERVIEW [KII] 

About: Key Informant Interviews are a qualitative data collection method consisting of in-depth interviews with people who know what is going on in the community or are knowledgeable about the issue or topic. Typically, Key Informant Interviews are a process evaluation activity.
Common Uses: Key Informant Interviews are used to collect information from a wide range of well-connected and informed people who have firsthand knowledge about the community/issue. These experts can provide insight on the environment and inform intervention strategies, how things work, who projects should be talking to, the best options or timing to move forward, the meaning or context of other findings, or how to effectively persuade decision makers. Key informant interviews can also be used to monitor community engagement and stakeholder buy-in. Key Informant Interviews should not be used for debriefing with project staff.
Best Practice

Key informant interviews should be conducted by a skilled interviewer. 

A separate interview guide may need to be tailored and structured differently for each type of key informant. 

When the activity investigates different things at different times, such as background policy research early on and follow-up after policy implementation, it should be written as two separate activities rather than the same activity with multiple waves. In addition, it may make sense to write a separate report for each key informant type rather than reporting on all key informants together.

Consider coordinating interviews with other tobacco control projects in the region so that policymakers are not bombarded by multiple people requesting interviews. Strategize when it makes the most sense to reach out to officials and for what purpose. Early meetings may be more of a general way to start building rapport with the informant and ask about their priorities; later on, when the campaign is farther along, the time is right to ask about their position on the issue.

Any visit or interview with elected officials cannot include lobbying. 


 

Resources

 

MEDIA ACTIVITY RECORD [MAR] 

About: A Media Activity Record is a data collection method that monitors the news to look at the number, type, content, sentiment, and response to media coverage on tobacco control-related topics over a specified period of time. Typically, Media Activity Records are a process evaluation activity.
Common Uses: Media Activity Records are used to assess how tobacco control issues are being framed, the accuracy and neutrality of facts presented, and the level of public support or opposition reflected in the sentiment of media pieces. They also measure the reach of particular messaging generated by projects/coalitions to educate the public with media releases or letters to the editor. This information can be used to inform the timing and direction of intervention activities and help guide local efforts to support educational campaigns and policy advocacy.
Best Practice

The Media Activity Record (MAR) is most useful to the intervention if news is monitored regularly, at least once per week, in order to respond to opportunities to engage. The responsible party (the main person(s) collecting data) should be part of the team that crafts the project’s media messages, responds to media inquiries and misinformation, and has the time to continually monitor and swiftly respond to media coverage (e.g., a communications/media specialist or coalition/community engagement coordinator).

A MAR should be conducted in a somewhat similar fashion for social media, although monitoring and responding should happen on a daily basis rather than a weekly one. Because the metrics, frequency, and platforms for social media activity differ from more traditional media, it typically makes more sense to use separate tracking forms to document search results for each. Paid media should be recorded and reported on a form designated for that purpose, which can be found on Partners.

Resources

OBSERVATION

About: Observation is a direct method of collecting data or information through, as the name implies, observing. Some types of observation are open-ended, where the data collector records everything observed in a situation or during an event. However, Observation as a data collection method in California Tobacco Control projects is typically structured (using specific variables and according to a pre-defined schedule) and can be used as a process and/or Outcome Evaluation activity.
Common Uses: Common uses of observations in California Tobacco Control include measuring the presence of tobacco litter, smoking/vaping behavior, location and/or proximity of smoking/vaping to “no smoking” areas/children’s play areas/etc., the presence/location/size of “no smoking” signage, the presence of ashtrays or ashcans, the availability and marketing of tobacco/alcohol/food products in stores, the visibility of STAKE Act or tobacco policy signage, and other observable items. Observations can also be performed as part of trainings to confirm that learning has been understood correctly, data collectors are adhering to protocols, and/or trainees apply correct messaging, as examples.
Best Practice

An Observation can be conducted in one wave (pre or post) or in multiple waves (pre/post or annually), depending upon the needs of the project and the desired change. For this reason, it is often crucial to identify and document the vantage points from which data collectors observe in each wave so that the process can be replicated in future rounds.

Because observations can be labor-intensive, sometimes community youth and/or adult volunteers are recruited to assist with the observations. Volunteers should be trained on data collection procedures as well as monitored in the field in order to provide support as needed and ensure high quality data.

Resources

POLICY RECORD REVIEW [PRR]

About: A Policy Record Review is a qualitative data collection method that uses information from secondhand or thirdhand sources (e.g., documents or web content) to investigate and/or track decision makers’ stances on tobacco control topics. Policy Record Reviews typically include four components: 1) pre-intervention research, 2) meeting observation, 3) documenting the policy adoption process, and 4) comparing an adopted policy to a model policy. Policy Record Reviews are typically a process evaluation activity in California Tobacco Control.
Common Uses: The purpose/utility of Policy Record Reviews depends upon the stage of the intervention/evaluation plan in which they are used. Pre-intervention research is used to assess the extent of tobacco-related policies in the jurisdiction, the voting records of currently seated policymakers, as well as the level of support for tobacco-related policy among a decision-making body. A Policy Record Review in the form of a meeting observation tool can be used to document issues raised for or against tobacco control-related issues, identify possible champions, and help frame messaging to promote policy adoption and counter opposition. A Policy Record Review can also be used to document the policy adoption process, and to confirm the presence of key provisions in the policy language by comparing an adopted policy to a model policy.
Best Practice

Policy Record Review research should be done at the beginning of a new objective or when selecting a new targeted jurisdiction. This research is best conducted prior to a Midwest Academy Strategy Chart session so that participants are informed of what policies exist and the voting records of current policymakers on related issues. This information can help identify potential champions and opponents and can then be used to strategize around potential opportunities, threats, and allies. During the campaign, meeting agendas and minutes should also be monitored regularly so that the project can be alerted to any new opportunities or threats to the campaign.

The continuous monitoring means that during an active campaign period, a Policy Record Review for each jurisdiction/policy worked on may need to be updated and submitted with each progress report.

Resources

For guidance about how to conduct a Policy Record Review, see these resources and tracking tools:

PUBLIC OPINION SURVEY [POS] 

About: A Public Opinion Survey is a data collection method used to gauge the knowledge or sentiment of a community about an issue, e.g., knowledge or attitudes about minimum pack size. The survey can be used as a process and/or Outcome Evaluation measure. The data collection instrument typically includes demographic, geographic, and short yes/no or multiple-choice questions.
Common Uses: A Public Opinion Survey can be used at any point in the intervention/evaluation timeline: at baseline to determine whether additional education is needed by measuring community knowledge or awareness of an issue; at mid-point to document community support for policy to persuade decision makers to take action; or before and after the intervention to measure change in attitudes over time.
Best Practice

A Public Opinion Survey can be done via telephone, mail, online, and/or in-person. The types of data collectors needed depend upon how the POS is being conducted. With that in mind, telephone or in-person data collectors could be professional call centers, trained phone interviewers, trained youth and adult volunteers, or project staff. 

The quality of the data greatly depends on the sample of people from which information is collected. Depending upon the population of interest and what is being measured, the sample could be selected via a simple random sample, stratified sample, cluster sample, or convenience sample. The sampling method chosen should ensure that the sample is representative of the population from which information is desired. A truly random sample helps ensure that the voices of the entire community are heard. In order to be able to understand how various segments of the population may think, it is important to collect demographic information such as age, race/ethnicity, and gender identity as well as other factors such as language spoken, socioeconomic status, zip code or neighborhood, smoking status, etc.

So that the voices of all segments of the community are being heard, be sure that the survey will be comfortably understood by participants with differing levels of education or language comprehension. Test the survey for readability (comprehension) and translate it into the languages of the communities of interest. Also be sure to strategically think through where, when, and how the survey will be administered so that the voices included are from various parts of the community.

Resources
  • Public Opinion Surveys https://breeze.ucdavis.edu/pos/ 
  • Journey of a Survey: roadmap to usable data - https://tobaccoeval.ucdavis.edu/files/ReadyTalk/Journey%20of%20Survey/lib/playback.html
  • Planning Observations and Public Opinion Surveys https://tobaccoeval.ucdavis.edu/files/ReadyTalk/Obs%20_%20POS/lib/playback.html
  • https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/media/documents/POP%20Guidance%20Resource%20Final%206.15.22_0.docx

TOBACCO PURCHASE SURVEY [TPS, YATPS, YAEPS]

About: The Tobacco Purchase Survey, also called the Young Adult Tobacco Purchase Survey or the Young Adult Electronic Purchase Survey, is a research method used to gather information in tobacco retail stores about the extent of illegal sales of tobacco to underage persons. The TPS can include two components: 1) the purchase attempt, and 2) an observation of store characteristics and product availability (which can be done simultaneously when there are two data collectors). This type of data collection method can be used as a process and/or Outcome Evaluation measure and can evaluate the change in the underage sales rate over time.
Common Uses: The Tobacco Purchase Survey can determine the extent of illegal sales of tobacco to underage persons and help enlist public and policymaker support for tobacco retail license policies. It can also measure the effectiveness of implementation of retail objectives and identify educational needs of retailers, policymakers, and the general public. This type of data collection cannot be paired with enforcement measures; it can only be used for education and planning.
Best Practice

A Tobacco Purchase Survey can be conducted pre-intervention/pre-policy adoption and/or after policy implementation. Typically, Tobacco Purchase Surveys are conducted in one or two waves (pre-intervention and/or post-policy adoption). Statewide protocols and options are available at https://www.tcspartners.org/Files.cfm?FilesID=3349. Other instruments that projects have used can be found in TCEC’s instrument database at https://tobaccoeval.ucdavis.edu/searchDCI. The safety of the data collectors is paramount. To that end, data collector trainings should include instruction on safety protocols for potentially negative or even hostile interactions, as well as roleplaying and practice.

Resources

Guidance, sample YATPS form and additional resources https://www.tcspartners.org/Campaigns/CommunityEngagement/LawEnforcement.cfm

OTHER – ASSET MAPPING

About: Asset Mapping is a collaborative exercise that helps create a “map” of the assets available within a coalition by identifying the existing capacity and skills of group members, as well as needs, and then developing an action plan to address the identified needs. Asset Mapping with a coalition is sometimes confused with developing an asset map for a community; however, they are two very different approaches. This type of data collection is a process measure.
Common Uses: Every group of people/coalition has its own unique set of assets – both tangible and intangible – to call upon. The needs of any group are directly related to what the group is trying to accomplish. The success of a coalition’s efforts is dependent upon the combination of skills (e.g., public speaking), resources (e.g., a meeting room), and relationships (e.g., personal friends with a policymaker). The Asset Mapping process helps a group identify what it has and what it needs to effectively achieve the project’s objective(s).
Best Practice: Asset Mapping should be conducted at the formation of the coalition or in the first six months of the program, whichever comes first. It should also be conducted ahead of the Midwest Academy Strategy Chart (MASC) to differentiate between the assets of the coalition and the focus of the MASC, i.e., targets and tactics for the policy campaign. Although the Asset Map can be conducted in multiple waves as members leave or join the coalition or staff turn over, only one wave is needed in the first six months of the program.
Resources

For a Coalition Asset Mapping Protocol, see: 

For resources on community asset mapping, see:

OTHER – CONSUMER/MATERIALS TESTING

About: Consumer/Materials Testing is a qualitative data collection method that is used to test newly developed materials or media pieces in terms of look, feel, content, language, approach, and action steps before they are released to a specific audience or to the general public. Consumer/Materials Testing is a process evaluation activity.
Common Uses: Typically, any materials that are for public consumption benefit from consumer testing, such as media (digital, video, radio, or print media/advertisements); educational materials (infographic, postcard, fact sheet, brochure, flyer, booklet, bookmark, fact card/sheet, information kit/packet, manual, poster, sign, sticker, wallet card); recruitment and orientation materials (application/orientation packet, selection process); and presentations.
Best Practice: Consumer/Materials Testing should be conducted once the material has been developed but before it gets released for use, and can be conducted in person, live online, or via an online survey. Consumer/Materials Testing is often done in one wave of at least one testing group. If there are divergent opinions regarding the materials and/or significant changes are made to the materials based on the feedback from a group, a second session may be needed. A second session is still considered part of one wave.
Resources

OTHER – MEMBERSHIP/PARTICIPATION TRACKING

About: A Membership/Participation Tracking evaluation activity tracks the number and type of activities in which participants take part. It informs the project about patterns in participation including active/inactive status, participant interests, conflicts, and other notable characteristics to help project staff improve participation experiences. This type of data collection method is typically used as a process measure in California Tobacco Control programs and can be used to assess changes in participation over time.
Common Uses: A Membership/Participation log measures membership, attendance, and participation of coalition members. Participation logs can help projects track which activities are more popular than others. When projects offer trainings or events, knowing how many people and who specifically attended can provide data needed for planning, monitoring, and compliance.
Best Practice: This activity can also include the use of coalition member intake, to match member interest with specific coalition activities, as well as a diversity matrix, to ensure the composition of the coalition matches the demographic profile of the jurisdiction. Because analysis is typically conducted annually to measure change over time, the number of waves of the Membership/Participation activity corresponds to the number of years in the project's scope of work.
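
A diversity matrix can be as simple as a side-by-side tally of coalition composition against the jurisdiction's demographic profile; the sketch below uses hypothetical figures for illustration only.

    # Hypothetical jurisdiction demographics (shares) and coalition roster
    jurisdiction = {"Hispanic/Latino": 0.39, "White": 0.35, "Asian": 0.15, "Black": 0.06, "Other": 0.05}
    coalition = ["Hispanic/Latino"] * 4 + ["White"] * 8 + ["Asian"] * 2 + ["Black"] * 1

    total = len(coalition)
    print(f"{'Group':<18}{'Coalition':>10}{'Jurisdiction':>14}")
    for group, jurisdiction_share in jurisdiction.items():
        coalition_share = coalition.count(group) / total
        print(f"{group:<18}{coalition_share:>10.0%}{jurisdiction_share:>14.0%}")

Gaps between the two columns can point to groups the coalition may want to recruit more actively.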

OTHER – PHOTOVOICE

About: Photovoice is a specific photographic technique used as a participatory research method to collect qualitative data in the form of photographs. In California Tobacco Control, Photovoice is most often used as a process evaluation activity.
Common Uses: The purpose of Photovoice is to record visually a community’s strengths and needs, to generate dialogue by sharing the images, and to reach decision makers through the community “needs” captured in the imagery. Photovoice is often used at the beginning of a campaign to measure the scope of the problem and to engage community members; for that reason, only one wave is usually conducted. Photovoice is especially useful for engaging community members, particularly those marginalized due to language, class, ability, age, geography, gender, etc., empowering them as potential catalysts for change in their community.
Best Practice: Photovoice is not simply an activity where pictures are taken. It is a specific participatory needs assessment methodology developed by Drs. Wang and Burris (https://pubmed.ncbi.nlm.nih.gov/9158980/). The process includes trainings for participants, facilitators, and leaders, and engages participants in the process of selecting photographs that reflect the community’s assets and needs, telling stories about what the photos mean, and interpreting the issues, themes, and theories reflected in the images.
Resources

OTHER – WEB/SOCIAL MEDIA ANALYTICS

About: Web/Social Media Analytics provides information about website content, navigability, and site behavior of visitors. This type of data collection is typically used as a process measure to inform and improve utilization of a website or social media page.
Common Uses

Often analytics need to be informed by other activities the project or community may be engaged in at the time. For example, measuring analytics before, during, and after a paid media campaign launch that refers people to the project’s website is a way to measure the impact of the paid media campaign, in addition to the effectiveness of the web content itself.

For social media, projects can use analytics to determine which days of the week and times of day are the best to post content that will be seen by their intended audiences. For example, if geotargeting social media content around parks in the community to promote a letter-writing campaign for smokefree parks, analytics will help determine when local parks have the most visitors and whether those visitors are being responsive to the advertising in real time. 

Analytics should be monitored consistently, at least monthly. Keep in mind that some social media sites only retain analytics for 3-4 months at a time, meaning projects will need to download them regularly to complete a summary report for their progress reports.
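Below is a minimal sketch of one way to combine those regular downloads into a monthly summary, assuming the platform provides CSV exports with date, impressions, and engagements columns. The folder name and column names are hypothetical and will vary by platform, and pandas is an assumed tool, not a requirement.

```python
# Minimal sketch (assumptions: monthly CSV exports downloaded from a social
# media platform, each with "date", "impressions", and "engagements" columns).
from pathlib import Path
import pandas as pd

def summarize_monthly_exports(folder: str) -> pd.DataFrame:
    """Combine downloaded analytics exports and roll them up by month."""
    frames = [pd.read_csv(f, parse_dates=["date"]) for f in Path(folder).glob("*.csv")]
    combined = pd.concat(frames, ignore_index=True)
    combined["month"] = combined["date"].dt.to_period("M")
    return combined.groupby("month")[["impressions", "engagements"]].sum()

# Example: print(summarize_monthly_exports("analytics_exports"))
```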

Best Practice: Analytics are typically reviewed after a specific event, such as the launch of a new web page, and/or on an ongoing basis. While monitoring may happen monthly, reporting may occur on a semi-annual or annual basis in multiple waves.
Resources

OTHER – NOT LISTED

If there is another data collection method that does not quite fit one of the previously mentioned options, provide as much description as possible so that plan reviewers understand exactly what is being proposed and staff have enough detail to carry out the activity and use the results. See the section on Choosing Evaluation Activities.

INSTRUMENT DEVELOPMENT AND PILOT TESTING

A list of the most commonly used evaluation activities in California Tobacco Control was provided in the previous section, as well as a description of the activity, common uses, and best practices. This section of the guide focuses on the process of developing and pilot-testing instruments, as well as training data collectors, when necessary.

DEVELOPING DATA COLLECTION INSTRUMENTS 

The data collection instrument is the means used to identify information sources and collect data (e.g., public intercept surveys, key informant interviews, focus groups). The instrument is often a survey, observation form, or interview guide, and should include specifications for how to use the instrument. Data is only as good as the instrument itself and the fidelity with which it was administered. For this reason, it is good practice to include data collection instruments as appendices when findings are reported so that readers can ascertain the validity of the results. Data collection instruments and protocols are required tracking measures that must be submitted with progress reports and relevant evaluation reports. 

Data collection instruments can come from a variety of sources. They can be developed by project staff, members of the coalition, the project’s evaluator, or from other organizations and individuals. The TCEC website contains a searchable data collection instrument database at https://tobaccoeval.ucdavis.edu/searchDCI that contains instruments previously used by local California Tobacco Control programs. 

Instruments can also be found in published journals or on the websites of advocacy organizations. In most instances (unless it is part of a coordinated statewide data collection effort), instruments should be adapted and tailored for the specific activity and intended audience and purpose. There are a number of resources on the TCEC website about developing or adapting instruments. See https://tobaccoeval.ucdavis.edu/instrument-development, and https://tobaccoeval.ucdavis.edu/end-use-strategizing. The team of associates at TCEC can also review and provide feedback on any instrument used for CTCP-funded data collection.

Instrument Essentials

During the instrument development process, it may be helpful to identify common evaluation uses and measures for each topic category. Below is a list of instrument essentials that provides an overview of various activities:

PILOT TESTING

Whether utilizing instruments in the database or developing instruments from scratch, a best practice is to test the instrument before launching data collection to ensure that respondents understand what is being asked by each question or prompt and that the instrument actually collects the type of information needed. It may also be necessary to pilot test the instrument again when revising it for different communities.

Pilot testing also helps gauge how long it may take to complete the activity. Including that time in an introduction when collecting data helps potential respondents decide if they want to participate in the activity or not. 

The key to pilot testing is to conduct a test run with participants who are similar to but not part of the sample. Be sure to include people who represent the range of diversity within the sample to make sure that everyone involved will understand the survey in the same way. 

When language, education level, acculturation, or other aspects of culture make using a single data collection instrument with everyone in the sample less feasible, the instrument will need to be tailored or translated for each specific audience. To ensure that the meaning of all the versions is the same, it is a good idea to have a bilingual helper back-translate the translated versions into the original language and check that they convey the same thing.

There are a number of ways to test instruments. More details are available at: 

TRAINING DATA COLLECTORS

In addition to a well-designed data collection instrument, training your data collection team to uniformly conduct the evaluation activity is arguably THE most important thing you can do to ensure high quality data. This section will cover what a data collector training should consist of.

A good training consists of more than just looking at each item on the tool and asking if your team has any questions. The training needs to create a shared understanding of what is being asked or observed, how to collect and record the responses or observations, and how to handle questions or potentially problematic issues. It should also assess the capabilities and readiness of each team member to collect data accurately.

Essential Training Components

  • Talk about the purpose of the evaluation activity, how the data will be used, and the importance of collecting quality data. Emphasize the goal of having everyone implementing the survey or observation in exactly the same way.
  • Discuss the sampling method that will be used to determine who/which locations will be included in the sample.
  • Go over the data collection protocol (instructions), which details how to approach/recruit participants or select observation vantage points; how much time to spend and how frequently to repeat; what to include or exclude; what to do when questions or difficulties arise; etc.
  • Read through the entire instrument with the group and explain the meaning and purpose of each question. Talk about each answer choice—what it includes and does not include.
  • Describe how to record the answers or observations.
  • Then demonstrate how to administer the instrument. Breaking the content into manageable sections, show the team how to ask the questions, how to read the answer choices, and what to measure/observe/record.
  • Provide ample time for data collectors to practice how to pronounce the words, pause for inflections or responses, handle ambiguous answers. Have them work in fishbowl settings, pairs, etc.
  • After each section, ask trainees what felt difficult and what they still have questions about. Discuss how to record answers that do not fit neatly into one category. Clarify when they should provide additional clarification to survey takers vs. when they should just repeat the question.
  • As pairs practice, walk around and watch and listen in. At the end, debrief with the group to clarify and reiterate any instructions or parts of the instrument that people had trouble with. Have everyone practice those sections again, perhaps with different scripts or scenarios.
  • Lastly, assess data collector readiness by having everyone observe facilitators (or strategically chosen participants) work their way through the data collection while recording the data as they would in the field. At the end, read out the correct answers and see how many of the team had 100% accuracy.
  • Retrain as needed and practice again until the group reaches an acceptable rate of compliance. Note which team members are still having problems and watch them during field practice or re-assign them to other duties where they will not contaminate the data collected.

Best Practice 

  • Give plenty of time and opportunity for practice. Trainees should read and work through the entire survey or observation multiple times. Provide different observation scenarios, or scripts for mock survey participants that include vague responses, requests for explanation, or difficult behavior. Use debriefings to discuss how to handle such situations, how to interpret vague responses, and how to record or note ambiguous data.
  • Set benchmarks for data collector reliability/readiness – e.g., practice until each member of the team achieves 95% accuracy (see the sketch after this list).
  • Before commencing actual data collection, field test the data collectors by practicing the survey or observation with participants or locations that are NOT part of the actual sample. This way, you can make any corrections and re-train as necessary without affecting the quality of real data.
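Readiness checks like the 95% benchmark above can be scored with a very small script. The sketch below assumes each trainee's practice answers and the answer key are stored as simple lists; the names, answers, and benchmark value are illustrative only.

```python
# Minimal sketch (assumption: practice answers and the answer key are lists
# ordered by question number; all values here are made up for illustration).
def accuracy(trainee_answers, answer_key):
    """Percent of items a trainee recorded exactly as the answer key."""
    matches = sum(1 for t, k in zip(trainee_answers, answer_key) if t == k)
    return 100 * matches / len(answer_key)

answer_key = ["yes", "no", "2", "outdoor", "no"]
trainees = {
    "Collector A": ["yes", "no", "2", "outdoor", "no"],
    "Collector B": ["yes", "no", "3", "indoor", "no"],
}

BENCHMARK = 95  # e.g., the 95% readiness benchmark mentioned above
for name, answers in trainees.items():
    score = accuracy(answers, answer_key)
    status = "ready" if score >= BENCHMARK else "retrain"
    print(f"{name}: {score:.0f}% ({status})")
```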

Resources

For more information on training data collectors, see the TCEC webpage:

https://tobaccoeval.ucdavis.edu/data-collector-training

ONGOING PLANNING, MONITORING, AND COMMUNICATION

Setting up a timeline for evaluation activities was introduced earlier in this guide to describe its use in the pre-funding approval phase and to determine the broad sequencing and timing of intervention and evaluation activities. For a "how-to" introduction to GANTT Chart planning, see the Setting Up a Timeline for Evaluation Activities section earlier in this guide. This type of planning chart can also be used on a regular basis as a program planning, monitoring, and communication tool, to ensure activities are completed and evaluation activities are used to inform the project. This section focuses on its use after project approval and during implementation.

One of the challenges with project implementation is ensuring coordination of activities throughout the workplan, which can be a 3-, 4- or 5-year plan, depending upon the funding period. The GANTT Chart style of timeline is an easy way to facilitate ongoing planning, communication, and coordination of activities regardless of the funding term. 

All project staff should be included in the development of the activity timeline and sequencing of activities on at least an annual basis. Implementation monitoring should occur on a regular basis during project meetings, which can occur on a monthly schedule or as project needs dictate. In this way, adjustments to the timeline can be made regularly.

ANALYSIS OF THE GANTT CHART TIMELINE

The process of determining the timeline for evaluation activities and sequencing key intervention activities with the evaluation activities, so that the information gathered through these evaluation activities can inform the project’s next steps, was described earlier in this guide. Below is a sample of what a 1-year plan might look like.

Once the activities have been sequenced, analysis of the GANTT chart should include making sure the timing of intervention vs. evaluation activities is appropriate for the project, as well as coordinating the timing of instrument development with data collection, which is often conducted by project staff. Analysis of the GANTT Chart Timeline also includes identifying bottlenecks, community events, or other environmental issues to factor into the timeline.

Some typical issues that can be ameliorated during the analysis of the timeline include:

  • Bottlenecks: e.g., timing multiple evaluation activities to be completed at the end of the progress reporting periods in December and June (see the sketch after this list).
  • Competing events: There may be events in the community that can compete with project implementation such as elections, holiday celebrations, or harsh weather. 
  • Competing activities: If the objective includes multiple targets for policy adoption and implementation, these processes should be sequenced accordingly.
  • Not allowing sufficient time for implementation: If the project’s plan type is “adopt and implement,” time policy adoption to occur before the last year of the plan so that there is a year to conduct implementation activities.
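One simple way to spot bottlenecks is to count how many deliverables land in the same month. The sketch below assumes a plain list of activity names and end months, all hypothetical, rather than any particular GANTT charting tool.

```python
# Minimal sketch (assumption: a simple activity list with hypothetical names
# and end months, used to flag months with several deliverables due at once).
from collections import Counter

activities = [
    ("Key informant interviews", "2024-06"),
    ("Public opinion poll report", "2024-06"),
    ("Coalition survey", "2024-06"),
    ("Observation survey", "2024-12"),
]

due_counts = Counter(month for _, month in activities)
for month, count in sorted(due_counts.items()):
    flag = "  <-- possible bottleneck" if count >= 3 else ""
    print(f"{month}: {count} deliverable(s) due{flag}")
```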

Additional resources can also be found at: https://tobaccoeval.ucdavis.edu/news/learning-how-make-most-your-evaluation-team or https://tobaccoeval.ucdavis.edu/evaluation-plan 

WHEN THINGS CHANGE

Despite the best laid plans, things can change. Allies can withdraw their support, city councils can move ahead and adopt a tobacco control-friendly policy unexpectedly, external factors like an epidemic may disrupt the ability to collect data in person, or activities in the plan may no longer seem like a good fit. 

The good news is that there are several ways to deal with this! Consult with the assigned program consultant (PC) to make any of the following changes to the plan: 

  1. Exchange the activity with a new evaluation activity of equivalent effort (% deliverable) that would add value to the work. For example, exchange a pre/post online survey having a 3% deliverable with a set of post policy implementation key informant interviews with a 3% deliverable.
  2. Drop the activity but increase the output and amount of effort of other existing activities equivalent to the % deliverable for the dropped activity (e.g., expanding the sample size of a public opinion poll, adding new data collection sites to an observation, etc.). The increase could be spread out over several existing activities or added to just one (if that makes sense to do so).
  3. Begin work in a completely new jurisdiction. If other options do not seem like a good use of project effort, it could make sense to collect baseline data for future work on a similar objective that could be continued in the next funding period.
  4. Take the loss. If it is not feasible to replace the unneeded activity with something else, the project may just have to return the dollar equivalent of the budget for the percent deliverable of the activities that were not completed.

If a logical case can be made for how a change will help the project move the objective forward, it is more likely to get the approval from the PC to alter the plan. 

When it becomes apparent that one or more of the evaluation activities is no longer a good fit, look at the plan in OTIS and review what was intended: the purpose of the activity, the audience, when it was scheduled, and how the project planned to use the data. Figure out whether the informational needs are still the same but require a new way to collect the data, OR whether there is a need to rethink what evaluation efforts would benefit the project. Talk over ideas with the Tobacco Control Evaluation Center to collaboratively come up with a reasonable proposal to present to the PC.

WORKING WITH EVALUATORS

There are a variety of evaluation-related roles associated with CTCP-funded projects. This section describes those roles, the role of the external evaluator in particular, and decisions each project may need to make when working with an external evaluator.

EVALUATION-RELATED ROLES AND RESPONSIBILITIES 

Although it is easy to think that evaluation activities are conducted solely by the external evaluator, evaluation involves a coordinated set of roles and responsibilities across the entire evaluation "team," as illustrated in the table below.

Sample Evaluation Team Responsibilities

  • Project Director (PD): day-to-day management; coordinates team activities; recruits data collectors; responsible for evaluation plan implementation.
  • Internal Evaluation Project Manager (EPM): monitors data collectors in the field; ensures activities are conducted on time; approves data collection instruments; works with the PD to ensure evaluation informs the intervention.
  • Internal Evaluator (IE): collaborates on the evaluation plan; performs advanced data analysis, if necessary; performs data translation for the program; documents activities.
  • External Evaluator (EE): develops the evaluation plan; develops instruments and databases; does analysis and reports for most evaluation activities; responsible for FERs and BERs.
  • Other Project Staff (OPS): provides input on the evaluation plan; helps with data collection logistics; trains data collectors and monitors data collection; provides outreach.
  • Tobacco Coalition (TC): participatory data analysis; serves as adult chaperones; participates in data collection; provides insights.

All roles on the evaluation team are interdependent. Every member of the evaluation team, whether it is the Project Director, the External Evaluator, or project staff doing data collection, depends directly or indirectly upon the others to accomplish the evaluation activities.

THE IE, EE, AND EPM DEFINED

In order for the evaluation team to work together effectively, it is important to know the basic differences between the Internal Evaluation Project Manager (EPM), the Internal Evaluator (IE) and the External Evaluator (EE).

  • The role of the Internal Evaluation Project Manager (EPM) is to provide oversight in the development and implementation of the Evaluation Plan. Having someone internal to the program who has knowledge of evaluation and who can ensure connections between the evaluation and the intervention is vital to project success. The Internal Evaluation Project Manager title is unique to Local Lead Agencies, but most procurements require someone to provide this evaluation oversight. This role is often filled by the Project Director or the Internal Evaluator (IE).
  • The External Evaluator's role, in contrast to the EPM and IE, includes designing the evaluation plan, developing data collection instruments, conducting analyses, and preparing evaluation reports. The External Evaluator is not an epidemiologist for the program but conducts evaluation based on the program's objectives. The External Evaluator can be an individual, a research institute, or a consulting firm and is intended to be an objective outsider. This does not mean that the project only communicates with the External Evaluator during reporting time, just that part of their role is to be objective as to which strategies work and which do not.
  • The Internal Evaluator (IE) can take on some evaluation activities or functions of the project, such as advanced data analysis, data translation, ensuring the tobacco project’s objectives are integrated into the larger organization’s strategic direction, and documenting activities. The IE may be the epidemiologist for the Local Lead Agency, the Project Director/Project Coordinator or another staff position who supports the project.

The minimum qualifications for the EPM and IE are the same as for the EE. Each evaluator position must have a combination of formal education and experience to be eligible for the online Local Program Evaluator directory.

CLARIFYING AND AGREEING ON WHO DOES WHAT

Each of the evaluator positions defined above has a unique set of roles and responsibilities. However, some evaluation activities or tasks are best completed by other members of the evaluation team. For example, the Policy Record Review, Media Activity Record, and Membership/Participation Tracking are best done by the staff coordinating the campaign so they can immediately use the information. If desired, evaluators may provide overall analysis to summarize broad changes over time, but the priority is for projects to use the data in real time.

It is important to discuss and document the agreement for any data collection, data collector training, and participation in any meetings and events, as well as data cleaning, analysis, and reporting in the subcontract with an External Evaluator and any other subcontractors.


ANALYZING DATA

TYPES OF ANALYSES

As previously mentioned, evaluation is driven by data. From designing an evaluation plan to reporting results, there is a need for data. Analyzing data is a systematic process that reveals insights, describes patterns, and shows relationships between the variables under inquiry. This allows the evaluator to convey results with more rigor and depth and to tell a better story behind the data.

ANALYZING QUANTITATIVE DATA

With quantitative (numeric, countable) data, use descriptive statistics such as frequencies, percentages, range, and measures of central tendency (mean, median, and mode) to describe findings from the data collected. Frequencies involve counting the number of times something occurs. Percentages involve calculating the proportion of times something occurs compared to the whole. The mean or average is calculated by adding responses and dividing by the total number. The median is the middle value or midpoint where half of cases fall below and half fall above this value. The mode is the most commonly occurring answer in the data. The range is the span from the lowest to the highest value, giving an idea of the variety of responses in the data. For details see https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/QuantitativeDataAnalysis.pdf
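As a minimal illustration of these descriptive statistics, the following sketch computes frequencies, percentages, mean, median, mode, and range for a small set of made-up survey responses using Python's standard library.

```python
# Minimal sketch (assumption: numeric survey responses stored in a plain list;
# the values are made up for illustration).
import statistics

responses = [3, 4, 4, 5, 2, 4, 3, 5, 4, 1]

print("Frequencies:", {v: responses.count(v) for v in sorted(set(responses))})
print("Percent answering 4 or 5:", 100 * sum(r >= 4 for r in responses) / len(responses))
print("Mean:", statistics.mean(responses))
print("Median:", statistics.median(responses))
print("Mode:", statistics.mode(responses))
print("Range:", (min(responses), max(responses)))
```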

Use inferential statistics to extrapolate findings from sample data in order to generalize findings to the entire population. Common inferential statistical tests include the Chi-Square test, Cohen's Kappa, the McNemar test, and the Two-Sample Proportion test. Use Chi-Square to compare two categorical variables where observations are unrelated. Use Cohen's Kappa for calculating interrater reliability. Use the McNemar test for comparing two categorical variables with only two possible response options, such as yes/no or agree/disagree. Use the Two-Sample Proportion test to compare two proportions from two different samples. For details, see https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/Statistics_Research.pdf
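The sketch below shows how these four tests might be run with common open-source Python libraries (scipy, statsmodels, scikit-learn). The counts and ratings are made up for illustration, and these particular packages are an assumption, not a CTCP or TCEC requirement.

```python
# Minimal sketch using common open-source libraries; all counts are invented.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.proportion import proportions_ztest

# Chi-square: support for a policy (yes/no) by respondent group (2x2 table of counts)
chi2, p_chi, dof, _ = chi2_contingency(np.array([[40, 60], [55, 45]]))

# Cohen's kappa: agreement between two observers rating the same six sites
kappa = cohen_kappa_score(["yes", "no", "yes", "yes", "no", "no"],
                          ["yes", "no", "yes", "no", "no", "no"])

# McNemar: the same respondents answering yes/no before and after an intervention
mcnemar_result = mcnemar(np.array([[30, 10], [25, 35]]), exact=True)

# Two-sample proportion test: 40/100 vs. 55/110 answering "yes" in two samples
z, p_prop = proportions_ztest(count=[40, 55], nobs=[100, 110])

print(p_chi, kappa, mcnemar_result.pvalue, p_prop)
```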

ANALYZING QUALITATIVE DATA

With qualitative (non-numeric, text/narrative or image-based) data, use content analysis to identify patterns or themes in the data. Data are coded using qualitative data analysis software or by reading and organizing the data by topic, evaluation question, respondent type, or other logical categories. Highlight quotes or ideas that stand out as important. Unlike quantitative data, where analysis typically happens at the end, qualitative data analysis involves making connections as the process progresses, both within and between the categories defined in the coding process. For tips and details, see https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/2018-05-17-TT17-Analyzing%20Qualitative%20Data.pdf and https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/CodingQualitativeData_0.pdf
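Once excerpts have been hand-coded, even a simple script can organize them by theme for review. The sketch below assumes interview excerpts have already been assigned theme labels; the quotes and codes are invented for illustration.

```python
# Minimal sketch (assumption: excerpts already hand-coded with theme labels;
# the excerpts and codes here are made up for illustration).
from collections import defaultdict

coded_excerpts = [
    ("We never see smoking near the playground anymore.", "policy support"),
    ("Signs are posted but nobody enforces them.", "enforcement"),
    ("I worry about vaping near the ball fields.", "youth exposure"),
    ("The new signs made a real difference.", "policy support"),
]

by_theme = defaultdict(list)
for quote, theme in coded_excerpts:
    by_theme[theme].append(quote)

for theme, quotes in by_theme.items():
    print(f"{theme} ({len(quotes)} excerpts)")
    for q in quotes:
        print(f"  - {q}")
```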

DATA CLEANING

Both qualitative and quantitative data collection are prone to human error. There could be missing data points or data in the wrong places, and checking for these issues is essential before analyzing any data. Besides irregularities due to human error, some data might just be naturally odd; for example, the distribution of the data could be very skewed. Before running any statistical tests, address these irregularities or risk producing heavily biased results. Data also tend to be messy: raw data are rarely in a structured format, so there is almost always a need to clean or organize them to be more manageable. Data analysts often spend more than half of their working hours cleaning data, in part because software packages such as NVivo, Excel, SAS, or SPSS each expect the dataset to be structured in a specific way. Cleaning data is essential to processing and analyzing data in a correct and efficient manner.
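The sketch below illustrates a few routine cleaning checks (missing values, duplicates, out-of-range values, skew) using pandas, assuming survey responses sit in a CSV file with hypothetical column names such as age, items_purchased, and store_type.

```python
# Minimal sketch (assumption: survey responses in a CSV with hypothetical
# columns "age", "items_purchased", and "store_type").
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export

# Flag common irregularities before any analysis
print("Missing values per column:\n", df.isna().sum())
print("Duplicate rows:", df.duplicated().sum())
print("Out-of-range ages:", ((df["age"] < 12) | (df["age"] > 110)).sum())
print("Skewness of a numeric item:", df["items_purchased"].skew())

# Basic cleanup: drop exact duplicates and standardize a text field
df = df.drop_duplicates()
df["store_type"] = df["store_type"].str.strip().str.lower()
```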

ANALYZING DATA WITH A HEALTH EQUITY FOCUS

Often, for reporting convenience, categorical detail is collapsed into broader amalgamated categories. This commonly occurs when race/ethnicity questions have specific ethnic group categories or write-in options. Rather than report results for the 8-20 Asian ethnic groups required by California state law, many reports combine the numbers for all of these into an "Asian or Asian American" category. But in some cases, keeping the data disaggregated can make a huge difference in revealing trends that would otherwise be hidden. For example, when disaggregating Asian American, Native Hawaiian, and Pacific Islander (AANHPI) data based on ethnicity, it was reported that certain subgroups such as Cambodian, Vietnamese, and Korean men used tobacco at a disproportionately higher rate than other Asian counterparts. This helped debunk the myth that the AANHPI population was healthier and used less tobacco than other groups. Disaggregating data is important for health equity because analyzing tobacco-use rates by subgroup allows for a more unbiased and transparent representation of specific demographic groups. This illustrates the importance of not only analyzing disaggregated data, but also collecting it: you cannot analyze information in a disaggregated fashion if your data collection instruments do not differentiate specific sub-categories in the first place.

In situations with small sample sizes, it is common to roll up smaller categories in order to draw stronger conclusions from the data. However, it is recommended to also present the disaggregated data according to the actual responses provided, for the purposes of transparency and advocacy. It is also more accurate, because the person cleaning and analyzing the data does not need to make assumptions about respondents or combine a response with others that may or may not match the intended response.
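The following sketch shows, with invented numbers, how a disaggregated view can reveal subgroup differences that an aggregated rate hides; the ethnicity categories and values are purely illustrative.

```python
# Minimal sketch (assumption: respondent-level data with a detailed ethnicity
# field and a yes/no tobacco-use field; values are made up for illustration).
import pandas as pd

df = pd.DataFrame({
    "ethnicity": ["Cambodian", "Vietnamese", "Korean", "Chinese", "Filipino", "Korean"],
    "uses_tobacco": [1, 1, 1, 0, 0, 1],
})

# The aggregated view hides the variation...
print("Combined rate:", df["uses_tobacco"].mean())

# ...while the disaggregated view shows which subgroups carry the burden
print(df.groupby("ethnicity")["uses_tobacco"].agg(["count", "mean"]))
```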


EVALUATION REPORTING

LEVELS OF EVALUATION REPORTING

Evaluation reporting is a process of reflecting on and documenting what happened and what worked – both at an immediate level as well as a meta level (or the process itself) at the end of the funding period. There are multiple levels of reporting, each with their own audiences, timing, and purpose. More information about each level is covered in the next few sections.

Levels of evaluation reporting, with the primary audience, timing, and purpose of each:

  • Sharing Results. Primary audience: policy makers, community leaders, coalition, general public, etc. Timing: depends on the phase of the campaign and how the information is being used. Purpose: inform, persuade, and motivate action on the issue.
  • Activity Summary Report. Primary audience: project staff (and data sources). Timing: as soon as possible after data collection and analysis are complete. Purpose: learning and reflection; inform next steps; document organizational history; give voice to data sources.
  • Progress Report. Primary audience: CTCP. Timing: every six months, as prompted in OTIS. Purpose: accountability; course correction.
  • Evaluation Report. Primary audience: peers and stakeholders. Timing: at the end of the funding period or when the objective is achieved. Purpose: share strategies and knowledge; comprehensive view of the objective.

The amount of information shared depends on the audience, timing, and purpose. For example, an evaluation activity summary is what the evaluator or project staff writes up about the method and results of each evaluation activity in the plan. (Note: A copy of the data collection instrument should always be included as an appendix to any write up of results.) 

This information is used by the project to make decisions about the timing and/or direction of what should come next in pursuing the objective. It also serves to document the organizational history about approaches taken, with whom, when, what happened, and with what degree of success. This can serve as a roadmap for future efforts, so the project does not need to re-invent the wheel when doing similar activities. 

A de-identified version of the findings should be shared back with the stakeholders who served as information sources (those who were interviewed, observed, or surveyed).

For more details about what should be included in each level of reporting and how each can be used, see https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/ActivitySummaries.pdf

PRIMARY AUDIENCE OF EACH REPORT

Depending on the level of reporting, the primary audience changes as the table above indicates. Throughout all levels, the project itself is the ultimate beneficiary and main audience of the reflection, analysis, and reporting process. Having a record of tactics tried and results achieved informs project efforts in the short term as well as for work still to come.

FINAL SAY OVER WHAT TO INCLUDE IN THE REPORT

While every report most likely relies on components or contributions from various stakeholders (information sources, project staff, project director/coordinator as well as the evaluator), the legitimacy of the reporting ultimately rests upon the shoulders of the report author(s). External evaluators are hired specifically to provide an objective viewpoint of project outcomes, and the deliverables they are contracted to produce should reflect their best understanding of the situation. It is a best practice to discuss constructive findings, even though they may seem negative or controversial, with the project in advance of finalizing/publishing the report so that they can be contextualized, but the final wording of the report is up to the report author, not the project.

TIMING OF REPORTING 

Reflect on and write up the results of activities and project efforts as soon as possible once data collection and analysis is completed. This is when the details are freshest in mind and when lessons learned can be used to make decisions for next steps. Even portions of the final evaluation reports can be written well before the end of the funding period. The writing can always be revised if later reflection shifts viewpoints about their importance or outcomes. 

Writing up results as soon as possible also helps retain institutional memory, especially when there are staffing changes. For example, a staff member may leave unexpectedly in year two, and the staff who jump in to take over will not have first-hand knowledge of those events when writing about them for the report due in year four.

USING THE INFORMATION OR PARTS OF THE REPORT 

Reporting should be used in real time, as soon as possible, to support or inform project efforts, always shared back with information sources (data equity and reciprocity), and/or repackaged for other uses/end-users (as content for presentations, fact sheets, talking points, social media posts, etc.).

Resources:

SHARING RESULTS

Evaluation reporting is a process of reflecting on and documenting what happened and what worked – both at an immediate level as well as a meta level at the end of the funding period. There are multiple levels of reporting, each with their own audiences, timing, and purpose. This section focuses on Sharing Results.

Levels of evaluation reporting, with the primary audience, timing, and purpose of each:

  • Sharing Results. Primary audience: policy makers, community leaders, coalition, general public, etc. Timing: depends on the phase of the campaign and how the information is being used. Purpose: inform, persuade, and motivate action on the issue.
  • Activity Summary Report. Primary audience: project staff (and data sources). Timing: as soon as possible after data collection and analysis are complete. Purpose: learning and reflection; inform next steps; document organizational history; give voice to data sources.
  • Progress Report. Primary audience: CTCP. Timing: every six months, as prompted in OTIS. Purpose: accountability; course correction.
  • Evaluation Report. Primary audience: peers and stakeholders. Timing: at the end of the funding period or when the objective is achieved. Purpose: share strategies and knowledge; comprehensive view of the objective.

Results can be shared in different forms, including data visualizations, one-page highlights, infographics, executive summaries, community presentations, fact sheets, educational packages, and other reports. Data translation and dissemination can be done by the Internal Evaluation Project Manager, the External Evaluator, or project staff. If data translation and dissemination services are desired from the External Evaluator, they should be negotiated prior to subcontract development or amendment and specified in the External Evaluator's scope of work deliverables.

For more details about what should be included in each level of reporting and how each should be used, see https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/ActivitySummaries.pdf

DATA OWNERSHIP

Data ownership is the acknowledgement that the information collected by a project does not belong to that project alone but also to its data sources. In the name of fairness, appreciation, recognition, and equity, it is imperative that results from any data collection effort be shared with those who supplied the information. We cannot simply enter a community and extract its knowledge, experience, and opinions without giving anything in return; to do so would perpetuate distrust and hostility.

Another way to honor community sources and contributors is to include some kind of attribution statement in the write-up of findings. The California Tobacco Control Program (CTCP) requires a statement of acknowledgment of the funding source:

“© [current year]. California Department of Public Health. Funded under contract # XX-XXXXX.” 

An additional recognition or appreciation note can also include a list of activity participants, data sources, data collectors, and other contributors. This small addition can be incredibly meaningful and help participants feel valued.

RECIPROCITY

Reciprocity in data sharing refers to the process of sharing results with the community from which the data was collected. As the We All Count data equity framework notes, evaluators can achieve greater health equity by making choices that break down barriers. Sharing results is one way to do this.

Sharing evaluation results with audiences can be done in a variety of ways: through presentations at meetings/forums, one-page highlights, Executive Summaries, participatory data analysis (aka “data party”), social media, etc. 

The first step in sharing results is to identify the audience(s) for the findings. Audiences can include people that assisted with the data collection, e.g., youth and adult data collectors for a public intercept survey, litter clean up or the like. Audiences can also be policy makers, advisory board or coalition members, local youth and cultural groups, and other CTCP-funded projects, etc. 

The second step is to determine what format or approach the audience would prefer. For example, data collectors might enjoy a data party to review the draft findings so that they can provide input into the final conclusions and recommendations. 

Considering format: Tenants of smokefree multi-unit housing might like a one-page highlight sheet regarding the observation survey conducted at their complex. People with low literacy might enjoy an interactive, intuitive web animation. Folks without an Internet connection will require a hard copy format. Tailor materials to meet stakeholders’ needs.

As with data translation, sharing results can be done by the Internal Evaluation Project Manager, the External Evaluator, or project staff. If the External Evaluator is expected to provide results-sharing services, this should be included in the project deliverables.

DATA EQUITY

Data equity in the translation and dissemination of findings refers to a consideration for the flow of information or the data transfer between stakeholder groups and the level of influence of each. 

For example, every project (and every objective) can have different stakeholders. It can be useful to identify the delivery of data between each group (e.g., a report to CTCP, community residents participating in a public opinion poll giving data to the project). 

Also identify each of the groups’ directions of influence on other groups (e.g., funder influences the project, the project influences the community.) This step allows the project to have a discussion about how to aim for a community focus or at least more influence to balance the relationships. 

Too often, there is little data equity among groups, especially the community. Community members are frequently the source or collectors of data, but do not have a say in the decisions that are made with this data. Projects must try to focus on the community having influence in order to balance the relationship (e.g., participatory sense-making). Data equity and data ownership are a two-way communication channel. Providing the information gathered to affected communities in an accurate and timely way is a fundamental ingredient in building trust with communities. Showing appreciation for data sources can go a long way toward transparency and relationship building. This is an aspect of reciprocity as well.

For more information, see We All Count.

LEVERAGE EVALUATION INTO POLICY ACTIONS THAT PROMOTE HEALTH EQUITY 

According to the Tobacco Education and Research Oversight Committee (TEROC) Master Plan, priority populations are disproportionately targeted by the tobacco industry, and consequently have higher rates of tobacco use and tobacco-related disease compared to the general population. These populations also experience greater secondhand smoke exposure at work and at home.

Priority populations include African Americans; American Indians and Alaska Natives; Native Hawaiians and Pacific Islanders; Asian Americans; Hispanic/Latinx people; people of low socioeconomic status; people with limited education, including high school non-completers; LGBTQ people; rural residents; current members of the military and veterans; individuals employed in jobs or occupations not covered by smokefree workplace laws; people with substance use disorders or behavioral health issues; people with disabilities; and school-age youth.

For that reason, a focus on health equity ensures that those populations with the greatest disparity receive equitable focus. By sharing information with policy makers regarding tobacco-related disparities, projects can leverage health equity into policy actions. 

ACTIVITY SUMMARY REPORTS 

Evaluation reporting is a process of reflecting on and documenting what happened and what worked – both at an immediate level as well as a meta level at the end of the funding period. There are multiple levels of reporting, each with their own audiences, timing, and purpose. This section focuses on the activity summary report level.

The amount of information shared depends on the audience, timing, and purpose. For more details about what should be included in each level of reporting and how each should be used, see https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/ActivitySummaries.pdf

Levels of evaluation reporting, with the primary audience, timing, and purpose of each:

  • Sharing Results. Primary audience: policy makers, community leaders, coalition, general public, etc. Timing: depends on the phase of the campaign and how the information is being used. Purpose: inform, persuade, and motivate action on the issue.
  • Activity Summary Report. Primary audience: project staff (and data sources). Timing: as soon as possible after data collection and analysis are complete. Purpose: learning and reflection; inform next steps; document organizational history; give voice to data sources.
  • Progress Report. Primary audience: CTCP. Timing: every six months, as prompted in OTIS. Purpose: accountability; course correction.
  • Evaluation Report. Primary audience: peers and stakeholders. Timing: at the end of the funding period or when the objective is achieved. Purpose: share strategies and knowledge; comprehensive view of the objective.

An activity summary is a deliverable that documents how a project carried out an evaluation activity. It should be written up by project staff and/or the evaluator right after the activity finishes and those involved reflect on how it went. That way, all the details of what happened are captured while everything is fresh in mind. 

Although summaries are almost always one of the deliverables required as part of the progress report (along with data collection instruments and protocols), do not wait until progress reports are due to write them. The project should be using the results immediately to shape and inform other activities that come next.

A de-identified version of the summary (removing all mention of individuals and perhaps organizations) should be shared back with information sources so they can see the results and what conclusions were drawn. An exception may be when sharing results could reveal a campaign strategy or otherwise jeopardize a community's campaign progress; in these cases, consider delaying the release of results.

INCLUDE W.A.H.U. IN ACTIVITY SUMMARIES

At the conclusion of an intervention or evaluation activity, it is a good idea to document how things went so that projects can replicate the activity in the future and report on progress toward the objective. 

Describe the W.A.H.U.:

  • WHY was the project doing the activity? What was it supposed to accomplish? 
  • What did the ACTIVITY consist of? Who was involved/targeted? When and where did the activity take place? What were the main talking points? What materials were used/shared? What tactics were employed? 
  • What HAPPENED during/as a result of the activity? What reactions or support did it provoke? 
  • What UTILITY did the activity have for the objective? How did the activity support/inform next activities? 

For some activities, a few sentences will suffice; others will require more depth and detail. 

The data collection instrument and protocol should be included as an appendix to any write up of evaluation results so that readers can assess the validity of findings.

DATA CONFIDENTIALITY AND DATA PRIVACY

Typically, unless there will be some kind of project follow-up with participants, it is best to assign every individual or entity a unique ID code and use that code in any data files (keep a codebook that links the ID number to individuals in a separate file). If there is a need, summary reports for project eyes only can include identifying information such as participant names, titles, and contact information, but those versions should NOT be uploaded with progress reports. 

While projects cannot absolutely guarantee a source’s confidentiality or anonymity, they can say that responses will not be identified or shared by name and that responses will be reported in aggregate with those made by others. To keep this promise, there are a number of things projects can and should do as a regular practice when reporting on intervention or evaluation activities.

  • Data sets: Projects often need to keep track of those that participate in program activities or who said what in meetings, interviews, or focus groups. As a result, projects may collect personal information such as names and contact information of participants. To create an extra layer of separation, keep names and contact information apart from data sets whenever possible. Instead, assign each individual with a unique ID number that corresponds to their name and contact information in a separate document. Data sets are primarily intended for internal project uses and are not typically shared beyond project staff and evaluators.
  • Activity summaries: When project staff or evaluators write summary reports for internal project purposes, it can sometimes be appropriate to identify certain individuals by name and role. Public figures such as political leaders, elected officials and agency personnel can fall into this category. However, individual community members should not be identified by name. Instead use the ID number or descriptors like their job title, population group, school, or store type.
  • Progress and evaluation reports: Once portions of an activity summary write up are to be shared outside the project—in progress or evaluation reports, for example—all individual names should be stripped from the document. Instead, refer to people by role and/or organization or report results in the aggregate. This can lend weight to the results while still minimizing the exposure of individuals.
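A minimal sketch of the ID-code practice described above follows, assuming a small participant list; the names, fields, and file names are hypothetical. The codebook file stays separate and access-restricted, while only the de-identified file travels with the data set.

```python
# Minimal sketch (assumption: a participant list with names and contact info;
# all names, fields, and file names are hypothetical).
import csv

participants = [
    {"name": "J. Rivera", "phone": "555-0101", "role": "resident"},
    {"name": "M. Chen", "phone": "555-0102", "role": "store clerk"},
]

codebook, deidentified = [], []
for i, person in enumerate(participants, start=1):
    pid = f"P{i:03d}"  # unique ID code used in all data files
    codebook.append({"id": pid, "name": person["name"], "phone": person["phone"]})
    deidentified.append({"id": pid, "role": person["role"]})

# Keep the codebook in a separate, access-restricted file;
# only the de-identified file accompanies the data set.
for filename, rows in [("codebook_restricted.csv", codebook),
                       ("responses_deidentified.csv", deidentified)]:
    with open(filename, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```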

SYSTEM FOR DOCUMENT NAMES

When uploading files into the document repository in OTIS, remember to label each file with its evaluation activity number (e.g., 4-E-5) AND a description of what it is, e.g., the instrument vs. the summary report. 

For example: 

4-E-5 Post Training Assessment – Instrument

4-E-5 Post Training Assessment – Results

1-E-2 Media Activity Record – Log

1-E-2 Media Activity Record – Report 

1-E-2 Paid Media Tracking Form

3-E-2 Focus Group Guide

3-E-2 Focus Group Results

PROGRESS REPORTS

Evaluation reporting is a process of reflecting on and then documenting what happened and what worked – both at an immediate level as well as a meta level at the end of the funding period. There are multiple levels of reporting, each with their own audiences, timing, and purpose. This section focuses on the progress report submitted to CTCP in OTIS at regular intervals.

The amount of information shared depends on the audience, timing, and purpose. For more details about what should be included in each level of reporting and how each should be used, see https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/ActivitySummaries.pdf

Levels of evaluation reporting, with the primary audience, timing, and purpose of each:

  • Sharing Results. Primary audience: policy makers, community leaders, coalition, general public, etc. Timing: depends on the phase of the campaign and how the information is being used. Purpose: inform, persuade, and motivate action on the issue.
  • Activity Summary Report. Primary audience: project staff (and data sources). Timing: as soon as possible after data collection and analysis are complete. Purpose: learning and reflection; inform next steps; document organizational history; give voice to data sources.
  • Progress Report. Primary audience: CTCP. Timing: every six months, as prompted in OTIS. Purpose: accountability; course correction.
  • Evaluation Report. Primary audience: peers and stakeholders. Timing: at the end of the funding period or when the objective is achieved. Purpose: share strategies and knowledge; comprehensive view of the objective.

Progress reporting in OTIS consists of two parts: 1) the narrative in the comment box that sums up the progress made on each activity; and 2) the deliverables or tracking measures that indicate the quality of the work, typically in an activity summary report.

  1. The narrative in the comment box that sums up the progress made on that activity. This brief overview of what was completed and what difference it made allows the CTCP program consultant to get a sense of how the work on the objective is proceeding. This narrative should include the quantity of activities done during that reporting period, dates and location(s), and status, and, when applicable, explain WHY things went differently than planned and HOW and WHEN they will be completed. This should also tie to the challenges and barriers section.
    • Activity completed: “In community A we completed x number between date-date, and in community B we completed ## between date-date, see report attached [insert file name]. To date, ## have been completed in total”
    • Activity not completed: “We completed x of y number in z community. Things did not go as expected because... (e.g., evaluator did not have time to complete the report). This will be submitted in the next reporting period (or suggest alternative plan).” 
  2. The tracking measures identified in the plan should be uploaded into the document repository to provide detail about the specifics of what happened. For a data collection activity like a public opinion poll, a copy of the survey instrument and a summary report describing the methodology used as well as the findings would be uploaded into the document repository in OTIS to accompany the activity narrative in the comment box. 

Sometimes the survey instrument may be submitted as a tracking measure once it is developed, even if that is in an earlier progress reporting period. The summary report, with the instrument included as an appendix, can then be uploaded after the evaluation activity is completed in a later progress reporting period.

EVALUATION REPORTS

Evaluation reporting is a process of reflecting on and documenting what happened and what worked – both at an immediate level as well as a meta level at the end of the funding period. There are multiple levels of reporting, each with their own audiences, timing, and purpose. This section focuses on the evaluation report level.

Levels of evaluation reporting, with the primary audience, timing, and purpose of each:

  • Sharing Results. Primary audience: policy makers, community leaders, coalition, general public, etc. Timing: depends on the phase of the campaign and how the information is being used. Purpose: inform, persuade, and motivate action on the issue.
  • Activity Summary Report. Primary audience: project staff (and data sources). Timing: as soon as possible after data collection and analysis are complete. Purpose: learning and reflection; inform next steps; document organizational history; give voice to data sources.
  • Progress Report. Primary audience: CTCP. Timing: every six months, as prompted in OTIS. Purpose: accountability; course correction.
  • Evaluation Report. Primary audience: peers and stakeholders. Timing: at the end of the funding period or when the objective is achieved. Purpose: share strategies and knowledge; comprehensive view of the objective.

For most projects, a separate evaluation report is required for each objective. The evaluation report is where the project reflects on what the objective was trying to achieve, how the project went about it and what turned out to be pivotal events or strategies – those that moved the work forward as well as any factors or situations that were obstacles. 

The point is to assess the efforts toward the objective in all target jurisdictions, articulate what was achieved in the end, what lessons were gleaned, what still needs to be done, and recommendations for what to do differently next time.

Evaluation reports do not need to describe every single activity but do need to provide enough of an overview so readers can get a sense of the strategies used and how things went in the especially crucial or pivotal activities. 

Provide a broad picture of all the work the project poured into this effort. Describe the WAHU of key activities briefly but with enough detail so readers could follow similar strategies and footsteps. Be sure to identify what happened as a result. Additional detail (or an activity summary) can be attached along with the data collection instruments in the report appendix to supply the specifics of what occurred.

Depending on the specifications of each procurement, projects need to follow the Tell Your Story evaluation report guidelines or other instruction from CTCP. Check the funding guidelines to see which reporting requirements to follow.


USING EVALUATION

ENDING COMMERCIAL TOBACCO USE IN CALIFORNIA

Many California communities lack comprehensive tobacco control policies. This might lead some to draw the conclusion that because a community lacks certain laws and protections, it must need those laws and protections. However, the fundamentals of public health require evidence that a problem exists before we begin to determine solutions that will solve the problem. This is where local evaluation is essential to help demonstrate that a problem exists within a community and then engage the community to identify and advocate for the solutions to the problem. 

Evaluation is not just about the data or results. Evaluation also provides multiple opportunities to involve community members in pushing for the change they want to see. Each phase of the evaluation life cycle highlights various entry points for community engagement, buy-in, and sustainability of changes long after a project's efforts have ended. These opportunities are investments in the community and highlight existing resources and potential. Building up the skills of community members also provides a pipeline for future public health practitioners. 

Evaluation provides tools for working toward health equity, such as:

  • routine collection and use of demographic data to understand who is affected by health inequities;
  • reciprocity, or sharing results with data sources so they can use the findings in their own efforts toward health equity; and
  • highlighting community voices for more impactful, sustainable, and authentic change.

Data help to highlight where change is most needed, to set priorities for how to address issues, and to follow progress toward a desired community goal. The involvement of a wide range of stakeholders is a hallmark of evaluation; stakeholders must be involved in different stages of an evaluation project to ensure the results are meaningful and useable.

Evaluation activities can often be therapeutic, allowing participants to process some of their lived experiences, honoring the past while building a better future. Evaluation brings people together to have meaningful interactions around the common goal of improving the health of the community, starting with ending commercial tobacco use in California.

EVALUATION GLOSSARY

The vocabulary provided in this section of the guide includes a description of words and phrases commonly used in tobacco control evaluation.

Asset: Assets represent an area of work which projects can choose to address with an objective. Assets represent factors that promote and sustain tobacco control efforts, such as community engagement and inclusivity, capacity building, cultural competence, community planning, and funding for tobacco control activities. A complete list of assets is included in the Communities of Excellence Needs Assessment Guide: https://www.cdph.ca.gov/Programs/CCDPHP/DCDIC/CTCB/CDPH%20Document%20Library/Community/ToolKitsandManuals/2020CXManual.pdf 

Asset Mapping: The process of identifying and providing information about a community’s strengths and resources. It allows projects to begin from their strengths rather than a list of shortcomings as in traditional needs assessments. See https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/CoalitionAssetMappingTool-Final.doc or https://vimeo.com/339624482 

Coalitions and Evaluation: In the context of California Tobacco Control, coalitions need to be assessed every year. To make this activity more efficient, the Tobacco Control Evaluation Center (TCEC) has a coalition satisfaction survey service where projects can provide a link to their coalition members to respond to a survey, then TCEC will give projects a link to their results: https://www.surveymonkey.com/s/SampleCoalitionSurvey 

Coalitions can help make evaluation more useful. Coalitions can serve as advisory boards to projects and ensure that culturally competent strategies are being used. For example, coalitions can review data collection instruments to ensure they are appropriate for the target audience, be included in participatory data analysis meetings to provide context and interpretation for evaluation findings, etc. See also Cultural Humility in Evaluation and see the presentation on coalitions and evaluation here: https://www.tcspartners.org/Campaigns/SpiritOfCoalition/index.cfm

Content Analysis: A method to analyze qualitative data by finding common themes or enlightening data points. See https://tobaccoeval.ucdavis.edu/analyze-data 

Cultural Humility in Evaluation or Culturally Competent Evaluation: Conducting evaluation with those affected rather than for those affected. Evaluation practice that is actively cognizant, understanding, and appreciative of the cultural context in which the evaluation takes place and employs culturally and contextually appropriate methodology. Can more broadly be described as tailoring materials to the audience. See https://tobaccoeval.ucdavis.edu/culture and https://us17.campaign-archive.com/?u=e193fa22969b0b11ffdbb737f&id=100fb22b83 

Data Collection: The way facts about a program or its activities and its outcomes are gathered. In California Tobacco Control, there are nine main methods for data collection: education/participant survey, focus groups, key informant interview, media activity record, observation, policy record, public opinion poll/intercept survey, young adult tobacco purchase survey, and other (e.g., Photovoice, Google or Facebook Analytics).

Data Collection Instrument: The set of questions or data points used to identify information sources and collect data (e.g., public intercept surveys, key informant interviews, focus groups). The data collection instrument is often a survey, observation form, or interview guide which comes with specifications for how to use the instrument. For sample instruments, see https://tobaccoeval.ucdavis.edu/searchDCI

Data Collection Mode/Logistics: Mode refers to the manner in which data collection is administered, such as in person vs. online, or pen and paper vs. handheld device. Logistics states when data collection will take place, what locations will be selected, who is collecting the data, and how they will be trained and assessed for readiness. It also states the mode (e.g., pen and paper, handheld device, video camera) and how many times, locations, or responses a data collection activity aims to cover.

Data Collector Readiness: After and/or during a data collector training, data collectors are assessed through surveys and/or observations to ensure that data collection protocols are being followed as designed. This is important so that all data collectors use the same methodology for asking questions and/or observing behaviors or the environment. This reduces any variability or bias that could be caused by the data collector. See also: Inter-rater Reliability.

Data Collector Training: This type of training is conducted when there is more than one person involved in collecting data to ensure they implement the data collection methodology in precisely the same way. See also data collector readiness.

Data Equity: A systematic approach to evaluation that stresses the importance of reciprocity or giving evaluation results back to the sources of information; data sovereignty. It is also a framework and series of reflection questions that provides ideas for incorporating cultural humility in evaluation. See https://weallcount.com/the-data-process/ and http://www.jliconsultinghawaii.com/blog/2020/7/10/data-equity-what-is-it-and-why-does-it-matter.

Deliverable/Percent Deliverable: The output or work product that is produced as proof that an activity was completed. Percent deliverable refers to the portion of the budget allocated to an activity. Deliverables and percent deliverables are used for accountability and transparency. Often, procurements require that a minimum percentage of deliverables be allocated to evaluation activities.

Descriptive Statistics: Numbers and tabulations used to summarize and present quantitative information concisely. This can include calculating the mean, median, and mode or percentages and frequencies of responses. Other statistical analyses such as chi-square, t-tests, or confidence intervals may also be calculated. See https://tobaccoeval.ucdavis.edu/analyze-data 
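
For illustration only, here is a minimal Python sketch (standard library only) showing how a few of the statistics named above could be calculated for a small set of hypothetical survey ratings; the data and variable names are invented for this example and are not drawn from any CTCP project.

    # Hypothetical 1-5 agreement ratings from a participant survey
    from statistics import mean, median, mode
    from collections import Counter

    ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

    print("Mean:", round(mean(ratings), 2))   # average rating
    print("Median:", median(ratings))         # middle value
    print("Mode:", mode(ratings))             # most common rating

    # Frequencies and percentages of each response option
    counts = Counter(ratings)
    for value in sorted(counts):
        pct = 100 * counts[value] / len(ratings)
        print(f"Rated {value}: {counts[value]} responses ({pct:.0f}%)")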

Dissemination: The distribution of evaluation findings to stakeholders. This can include products such as reports, fact sheets, press releases, letters to the editor, phone interviews, webinars, presentations, or other types of audio or visual presentations. See Reciprocity and Reporting.

Education/Participant Surveys: Surveys used to assess knowledge, skills, or unmet need before/after an event or training. If conducted before a training, results can be used to tailor the training to the level of the participants. If conducted after a training, results can be used to improve future trainings with similar audiences or topics. 

End Commercial Tobacco Campaign Evaluation: A combination of surveillance and evaluation activities that includes observations of tobacco retail stores, parks and beaches, and multi-unit housing; public opinion polls; key informant interviews; and community engagement tracking. 

End Use Strategizing: A process of developing evaluation activities and instruments with the ultimate goal, purpose, and use of collected data in mind. This ensures that the information gathered will serve the intended purpose by starting at the end and thinking about what is needed at each phase of planning in order to obtain the needed pieces of information. See https://tobaccoeval.ucdavis.edu/end-use-strategizing 

Evaluation Activities: Specific actions or processes relating to evaluation planning, data collection, analysis, reporting, dissemination, and use. Project workplans consist of intervention activities and evaluation activities that work together toward achieving a specific objective.

Evaluation Design: The frame by which data is collected and analyzed in order to reach conclusions about program or project efficiency and effectiveness. In California Tobacco Control, the design is typically non-experimental or quasi-experimental.

Evaluation Plan: In California Tobacco Control, the evaluation plan is detailed in OTIS and specifies what will be done, how it will be done, who will do it, when it will be done, why the evaluation is being conducted, and how the findings will be used. It also identifies the tracking measures (deliverables), percent deliverables, and responsible parties.

Evaluation Plan Type: In California Tobacco Control, this refers to the type of evaluation that corresponds with the objective. Choices are legislated or voluntary policy adoption and/or implementation, individual behavior change, other with measurable outcome, and other without measurable outcome.

Evaluation Summary Narrative: The section of the scope of work that describes the evaluation plan in a coherent order (e.g., chronological, process vs. outcome), connecting how evaluation activities will be used to improve interventions and strategies. It describes what is expected to change and how the change will be measured; states the plan type, evaluation design, and whether Outcome Evaluation will be conducted; and describes the project’s plan for sharing and disseminating findings.

Focus Groups: A series of discussions, led by a trained facilitator, with groups of people selected for their relevance to a specific topic, used to explore insights, ideas, and observations on a topic of concern. Note that the emphasis is on multiple groups, not on a single focus group. Information obtained is compared between and across groups in order to identify common themes and differences.

Formative Evaluation: Evaluation conducted during project implementation with the aim of improving performance during the implementation phase. Related term: process evaluation. 

Indicator: An environmental- or community-level measure that serves as the basis for project objectives working toward tobacco control policies. Indicators cover a variety of topics including tobacco marketing, promotion, availability, and distribution; economic factors; secondhand smoke exposure; environmental impact of tobacco waste; and availability of cessation support. A complete list of indicators is included in the Communities of Excellence Needs Assessment Guide: https://www.cdph.ca.gov/Programs/CCDPHP/DCDIC/CTCB/CDPH%20Document%20Library/Community/ToolKitsandManuals/2020CXManual.pdf 

Institutional Review Board (IRB): A committee that reviews proposed research or evaluation methods to ensure that appropriate steps are taken to protect the rights of participants. Typically, IRB approval is required when research involves sensitive topics or private data that has the potential to put participants at risk. IRB approval is not generally needed in California Tobacco Control evaluation work. However, each organization or agency may have its own IRB, which should be consulted to determine whether IRB approval is necessary. For topics generally not subject to IRB approval, see examples at https://hso.research.uiowa.edu/studies-are-not-human-subjects-research 

Inter-rater Reliability: The degree to which different raters/observers give consistent estimates of the same phenomenon. The goal is to reduce variability/errors caused by data collectors. See also data collector readiness.
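
As an illustration, the short Python sketch below calculates percent agreement and Cohen’s kappa, one commonly used inter-rater reliability statistic, for two hypothetical data collectors rating the same ten store visits; the observations and names are invented for this example.

    # Hypothetical yes/no observations recorded by two data collectors
    # at the same ten store visits (1 = tobacco ad present, 0 = absent)
    rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
    rater_b = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]

    n = len(rater_a)
    agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    print(f"Percent agreement: {agreement:.0%}")

    # Cohen's kappa corrects the observed agreement for chance agreement
    p_yes_a = sum(rater_a) / n
    p_yes_b = sum(rater_b) / n
    p_chance = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    kappa = (agreement - p_chance) / (1 - p_chance)
    print(f"Cohen's kappa: {kappa:.2f}")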

Key Informant Interview: A structured conversation; an in-depth qualitative interview with people selected for their first-hand knowledge about a topic of interest. An interview guide is developed with potential follow-up or probing questions. Unlike surveys, key informant interviews may vary the order of questions or adapt in other ways depending on how the conversation goes. See https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/2018-05-17-TT3-Conducting%20Interviews.pdf 

Media Activity Record: An activity that tracks and records the number, type, placement, and slant of media coverage achieved over a specified period of time in order to understand community awareness about tobacco control issues and the effectiveness of media efforts. See https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/MAR%20specifications%20%281%29.pdf and https://tobaccoeval.ucdavis.edu/sites/g/files/dgvnsk5301/files/inline-files/MAR%20tracking%20form%204.16.2020.xlsx 

Observations: A way of gathering data by watching behaviors, events, or noting characteristics of a physical setting. See “Making Observations” at http://tobaccoeval.ucdavis.edu/data-collection/DataCollectionMethods.html.

Outcome: Note the difference between the common term and the technical term “Outcome.” The common term “outcome” (lowercase “o”) is what comes as a result or follows something; a final product, or a consequence of something. The technical evaluation term “Outcome” (capital O) refers to the measurement of the intended results of program efforts or activities. It focuses on what difference was made and how well goals were met. 

In California Tobacco Control, Outcome Evaluation assesses the change an intervention has on people or the environment. Typically, this includes changes in knowledge, attitudes, awareness, behavior, observed tobacco litter, number of citations issued, smoking incidences or complaints, etc. It is not enough to simply count the number of policies passed; only what happens as a result of policy implementation is an Outcome. That is why implementation objectives require Outcome Evaluation, while adoption-only objectives do not. For example, a policy record review showing the number of policies passed is a process evaluation activity, not an Outcome Evaluation activity, in California Tobacco Control.

OTIS (Online Tobacco Information System): Online portal for submitting applications for funding, monitoring activities throughout the scope of work, submitting deliverables, communicating with CTCP staff, making revisions to a workplan, viewing important resources such as the OTIS Calendar, CTCP Administrative and Policy Manual, and much more. Access must be granted by the Project Director/Coordinator at https://otis.catcp.org 

Participatory Data Analysis: Group-level analysis of data that allows stakeholders to help interpret and provide context to evaluation findings, serving as a check to the conclusions an evaluator or internal staff may have arrived at without this input. See also Reciprocity, Data Equity, and Coalitions and Evaluation. 

Photovoice: A participatory project where participants use photo and/or video images to capture aspects of their environment and experiences and share them with others, with the goal of bringing the realities of the photographers’ lives home to the public and policy makers to spur change. This method allows those with different backgrounds, literacy levels, languages, and experiences to identify challenges and strengths in their community using photos. 

This labor-, time-, and resource-intensive method involves multiple meetings for recruiting, training, analyzing, interpreting, and debriefing experiences with the group, and an exhibit or showcase of results with the community. It is a very specific methodology that has been well documented in the literature. See http://journals.sagepub.com/doi/abs/10.1177/109019819702400309. 

Policy Record Review: A method of collecting background information on the history of an issue through analysis of records maintained by government agencies or other institutions. Potential sources are policy maker biographies, voting records, meeting agendas and minutes, and/or notes taken while attending meetings. Other types could include reviewing tobacco retail license citations issued or smokefree apartment lease language, etc. See: “Reviewing Media Activity & Policy Records” at http://tobaccoeval.ucdavis.edu/data-collection/DataCollectionMethods.html 

Process Evaluation: Evaluation of the programmed, sequenced set of activities done to carry out a program or project. Process evaluation is conducted to monitor, assess, and document how a program is working, ascertain progress, and identify whether changes need to be made to improve it. Process evaluation activities can be both quantitative and qualitative. See also Formative Evaluation.

Procurement: Opportunities to apply for and receive funding from the California Tobacco Control Program. Also referred to as a Request For Applications (RFA), Solicitation, Guidelines, or other terms. Details are available at https://tcfor.catcp.org/.

Public Opinion Polls/Intercept Surveys: Surveys that are conducted with a particular sample of the public designed to assess the knowledge and attitudes of a population.

Qualitative: Observations or information expressed using categories rather than numerical terms, often involving knowledge, attitudes, perceptions, and intentions. 

Quantitative: Information that can be expressed in numerical terms, counted, or compared on a scale.

Reciprocity: The practice of exchanging information for mutual benefit. In California Tobacco Control, projects should report back to their communities about the results of intervention and evaluation activities. Since the community is often the source of data (public opinions, observations, etc.) it is important to share the results with the source.

Reporting: The process of communicating evaluation findings and making recommendations for future action. Ideally, each data collection activity is accompanied by a report detailing the methodology, findings, and recommendations. This can be a formal report, fact sheet, pamphlet, presentation, or other product that communicates evaluation findings and offers recommendations for future action. 

Representativeness: The degree to which a sample is similar to the population from which it is drawn. Representativeness allows evaluators to make generalizations to the larger population based on a smaller sample or subset.

Responsible Party: The individual or group that must ensure a task or tracking measure is completed. This individual or group may not necessarily perform every aspect of the activity, but they ensure its completion and utility to achieving the objective. For evaluation activities, the evaluator does NOT always need to be listed as the responsible party. In fact, some evaluation activities, such as media activity records and policy record reviews, are better done by another team member. It may also be appropriate for coalition members to be listed as a responsible party for some activities.

Sample: A portion of the whole population of people or things to be investigated. The term census is used when the entire population is being observed or questioned, e.g., when all trainees will be given a satisfaction survey, whereas a sample is when just a portion of the trainees will be surveyed. The sample can aim to be representative of the entire population through random selection for inclusion OR it can be purposive where who/what is included is based on certain criteria rather than reflecting the makeup of the overall population.

Sample Composition: The source from which data will be gathered. For example, residents of a particular community or the general public, retailers, parks, beaches, public buildings or spaces, restaurants, multi-unit housing complexes, etc.

Sample Size: The number of units to be sampled from which data will be collected. Serves as the denominator in quantitative data analysis. See “Deriving Your Sample” at https://tobaccoeval.ucdavis.edu/evaluation-planning/EvaluationDesign.html
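
For a rough sense of the arithmetic, the sketch below applies a common textbook formula for the minimum sample size needed to estimate a proportion. It is offered only as an illustration with invented inputs, not as a CTCP requirement; the “Deriving Your Sample” resource above remains the reference for project planning.

    import math

    # Rough minimum sample size for estimating a proportion,
    # using the common approximation n = z^2 * p * (1 - p) / e^2
    z = 1.96   # z-score for 95% confidence
    p = 0.5    # assumed proportion (0.5 gives the most conservative estimate)
    e = 0.05   # desired margin of error (+/- 5 percentage points)

    n = math.ceil(z**2 * p * (1 - p) / e**2)
    print("Minimum responses needed:", n)   # -> 385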

Sampling Method: The method by which the sampling units are selected, e.g., census, simple random, stratified random, cluster, purposive, or convenience.
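
As an illustration of two of these methods, the Python sketch below draws a simple random sample and a stratified random sample from an invented list of retailers; the store names, areas, and counts are hypothetical.

    import random

    # Hypothetical list of 40 retailers, each tagged with a neighborhood
    retailers = [
        {"name": f"Store {i}", "area": "north" if i % 2 else "south"}
        for i in range(1, 41)
    ]

    # Simple random sample: every retailer has an equal chance of selection
    simple_sample = random.sample(retailers, 10)

    # Stratified random sample: draw separately from each area so both
    # areas are represented in proportion to their size
    north = [r for r in retailers if r["area"] == "north"]
    south = [r for r in retailers if r["area"] == "south"]
    stratified_sample = random.sample(north, 5) + random.sample(south, 5)

    print(len(simple_sample), "retailers selected by simple random sampling")
    print(len(stratified_sample), "retailers selected by stratified random sampling")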

Sexual Orientation and Gender Identity (SOGI): According to AB 677, SOGI data must be collected by California public health agencies anytime other demographic data are also collected. AB 677 text states, "The Lesbian, Gay, Bisexual, and Transgender Disparities Reduction Act requires specific state departments (including California Department of Public Health, our funder), in the course of collecting demographic data directly or by contract as to the ancestry or ethnic origin of Californians, to collect voluntary self-identification information pertaining to sexual orientation and gender identity, except as specified. Existing law prohibits these state departments from reporting demographic data that would permit identification of individuals or would result in statistical unreliability and limits the use of the collected data by these state departments, as specified."

Social Media Evaluation: Analysis that assesses the extent to which social media messaging is shared by others, by whom and to whom. See http://tobaccoeval.ucdavis.edu/analysis-reporting/UsingSocialMedia.html.

Stakeholder: A person or group that has a relationship with or interest in a program or its effects. The person or group can have influence over a program’s efforts and/or can be directly or indirectly affected by them. Stakeholder involvement is a hallmark of evaluation, and stakeholders should be involved at different stages of an evaluation project. 

Summative Evaluation: Evaluation of an intervention or program in its later stages or after it has been completed to assess impact, identify the factors that affected performance, measure the sustainability of results, assess the merit of the program, and draw lessons that may inform other interventions. See also Outcome Evaluation.

Theory of Change: Section of the scope of work that explains how and why the proposed combination of intervention and evaluation activities will result in the desired change. Common examples such as social norm change, community readiness, diffusion of innovation, health belief model, and others can be found at https://sbccimplementationkits.org/demandrmnch/ikitresources/theory-at-a-glance-a-guide-for-health-promotion-practice-second-edition/

Tracking Measure: The file submitted in OTIS for each activity to substantiate the deliverable.

Unit of Analysis: The people, places, or things that are being examined in an evaluation. The unit of analysis can be the coalition, a decision-making body, stores, parks, or apartments in a jurisdiction, a community of people, buildings on a campus, etc. It is important to note that the individuals that compose a unit of analysis may change, e.g., individual coalition members may leave or join. The findings, therefore, are not about the specific individuals but about the unit as a whole, such as the coalition.

Utility: Focus on the use of evaluation to make decisions or guide program efforts. The use of every evaluation activity should be made clear up front; if you do not know why you are doing something or how it can inform your work, then it is a wasted effort. Evaluation is designed to be useful and used, and identifying the intended use of evaluation data at the planning stage helps programs clarify the intended uses and topics of investigation. More about Utilization-Focused Evaluation by Michael Quinn Patton can be found at: https://us.sagepub.com/en-us/nam/utilization-focused-evaluation/book229324#tabview=toc and http://www.wmich.edu/sites/default/files/attachments/u350/2014/UFE_checklist_2013.pdf.

Waves of Data Collection: Repeated collection of data from the same population sample conducted in the same way, using the same questions, used to measure change over time.
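
For example, comparing responses to the same question across two waves might look like the following sketch; the counts are invented for this illustration.

    # Hypothetical number answering "yes" to the same question
    # in two waves of the same public opinion poll
    wave_1 = {"yes": 62, "total": 200}   # baseline
    wave_2 = {"yes": 88, "total": 200}   # follow-up, same questions and methods

    pct_1 = 100 * wave_1["yes"] / wave_1["total"]
    pct_2 = 100 * wave_2["yes"] / wave_2["total"]
    print(f"Wave 1: {pct_1:.0f}% yes; Wave 2: {pct_2:.0f}% yes; "
          f"change: {pct_2 - pct_1:+.0f} percentage points")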

Web or Social Media Analytics: A set of metrics that tracks and reports traffic to a website or user interaction with a social media page. Often, platforms provide user information; otherwise, third-party programs can be used. See the Tobacco Education Clearinghouse of California’s social media resource https://www.tecc.org/social-media-toolkit/.

Youth/Young Adult Tobacco Purchase Survey: A data collection method designed to document the rate of illegal sales of tobacco to minors and young adults through the use of youth or young adult decoys who attempt to purchase tobacco products. Additional observations such as store and/or clerk characteristics are also typically collected at the same time, e.g., presence of STAKE Act signage, posting of licenses, placement of products or advertisements, self-service access to tobacco products, and behaviors or conversations by store clerks such as asking for ID.


ACKNOWLEDGEMENTS

The first comprehensive evaluation guide for California Tobacco Control programs, written in 2001, was titled “Local Program Evaluation Planning Guide.” It contained helpful tips and prompts, suggested measurements and design types, and ideas for disseminating results. In 2007, the guide was updated and titled “OTIS Evaluation Guide.” The second version reflected guidance for the new OTIS (Online Tobacco Information System) and provided more extensive information about crafting SMART objectives, process and Outcome Evaluation, a glossary of evaluation terms, and sample plans on an accompanying CD. This third version, titled “California Tobacco Control Evaluation Guide,” intends to clarify expectations for CTCP-funded projects and CTCP staff, especially regarding reporting requirements. It also infuses health equity throughout, framing evaluation as an important tool that is both informed by and contributes to health equity among California’s priority populations. This guide is not intended to override previous versions, but instead provides additional guidance for local-level evaluation for California Tobacco Control programs.

There is much to be said about how to carry out evaluation activities. This guide focuses on information for developing evaluation plans and high-level guidance for a wide range of audiences. Supplemental resources written for evaluators and project staff about conducting the activities are continually being developed and updated to incorporate best practices and advances in the field of evaluation. For the latest information, visit the Tobacco Control Evaluation Center website at https://tobaccoeval.ucdavis.edu/.

Many people contributed their time, experience, and expertise in the development of this document. Kudos and Thank You to:

Catherine Dizon, Robin Kipke – TCEC 

Sue Haun – Local program evaluator

Cheryl Edora, Jena Grosser, Tina Fung, Lauren Groves – CTCP

TCEC review and support – Jeanette Treiber, Sarah Hellesen, Andersen Yang, Lance Jimenez, Diana Dmitrevsky, Jorge Andrews, Danielle Lippert

CTCP review and support – Humberto Jurado, Tonia Hagaman, Elizabeth Andersen-Rodgers, Rebecca Williams, Jenny Wong, Beth Olagues

Local program evaluators, project directors, and program staff review and support – Danica Peterson, Robynn Battle, Rhonda Folsom, Travis Satterlund, Eddy Jara, Natasha Kowalski, Evi Hernandez, Erick Rangel