It feels crazy typing this, but I recently had my first opportunity in FOUR YEARS to present in person to a group of real-life humans at Tobacco Prevention University. The organizers did a great job bringing the school theme to life, complete with a science lab where participants worked through an actual workplan.
Artificial intelligence (AI) is not a new concept; if you’ve ever used a program that checks your spelling and grammar, you’ve used a form of AI! However, the release of products like ChatGPT has kicked off a boom in AI technology, and it is growing more sophisticated every day.
So how does this AI boom affect our work in tobacco control evaluation? I took a deep dive into the topic to find out!
What's the point of giving feedback, anyway? Will my comments actually change anything?
If you've interacted with the Tobacco Control Evaluation Center at any point, you've undoubtedly received requests for feedback, whether on a training, a webinar, or your satisfaction with our technical assistance.
We all have our workplans, and in an ideal world, we would complete every activity by its assigned due date without incident. However, we are all painfully aware that we do not live in a perfect world, and oftentimes things do not go according to plan. What do we do then? In this article, we explore some real-life situations in which things did not go according to plan and how some evaluators have handled them.
While out on a walk after Thanksgiving, I was having a conversation with a friend. As we walked along, crunching the leaves and fallen acorns, we began talking about what we were thankful for. Our conversation touched on something subtle and perhaps overlooked: acknowledging and thanking those around us.
In April, I was invited to participate in the American Heart Association’s Tobacco Endgame Center for Organizing and Engagement’s Affinity call to share ideas about how evaluation can inform and support project decisions and actions. Or as Paul Knepprath, the Tobacco Endgame Center’s project director, put it: How to use evaluation to win campaigns.
Entering the world of California tobacco control evaluation can be overwhelming. Thanks to decades of hard work, California tobacco control has grown into a vast network of collaborators and partners, with its own set of rules, acronyms, and requirements.
But we have good news: Because of this well-established infrastructure, a new evaluator has a plethora of resources and support at their fingertips!
Like any new relationship, when an external evaluator takes on a new project, there will be an adjustment period as everyone gets to know each other. This may be especially true if the project is new to CTCP-funded work. Below are some considerations for fostering a successful partnership.
If possible, the evaluator and newly funded project should build extra time into the contract for meetings with project staff, including meetings that also involve their CTCP Project Coordinator (PC). There are multiple reasons why this is important:
It’s the end of an era at the Tobacco Control Evaluation Center. Long-time evaluation associate and former TCEC project director Jeanette Treiber has retired from her post to head for greener pastures! She won’t have to go far either—just right outside her door to the seven-acre Rio Linda farm she and her partner Deno have created.
2021 marks the start of a new contract for TCEC, and we’re excited to continue bringing our love of all things evaluation to our local lead agency partners around the state. And as you may have noticed, it’s been a busy year so far! Here are some important things for LLAs to take note of:
With the end of 2020 also came the close of another TCEC contract. In just three short years, our talented TCEC team delivered 23 webinars, 15 newsletters, 22 papers, 34 presentations, 681 instances of technical assistance (TA), and an additional 1,989 instances of TA specifically for what ended up being the last round of data collection for Healthy Stores for a Healthy Community.
This summer I had the opportunity to participate in several professional development trainings with big-name people in the evaluation world—names you see associated with foundational evaluation theories and publications—B-I-G. And these were not just short presentations of an hour or two, but 10- to 12-hour courses that took place over a period of weeks.
Every August, the Claremont Evaluation Center (CEC) at Claremont Graduate University offers a series of professional development courses, which introduce several topics related to evaluation.
This yearly conference is a great way for evaluators to listen and learn as well as to share evaluation-related skills and ideas, and to reflect on our profession.