AI in Evaluation: Where Do You Stand?

Exploring the Benefits and Pitfalls of AI

Artificial intelligence (AI) is not a new concept; if you’ve ever used a program that checks your spelling and grammar, you’ve used a form of AI! However, the release of products like ChatGPT has kicked off a boom in AI technology, and it is growing more sophisticated every day. 

So how does this AI boom affect our work in tobacco control evaluation? I took a deep dive into the topic to find out!

What is AI?

First of all, what do we mean when we say “AI,” anyway? Artificial intelligence is, broadly speaking, the imitation of human thought by a machine. AI is also the term given to the field of research that seeks to develop computers that can mimic human decision-making and behavior. Within this field, there are different kinds of AI programs, including: 

  • Machine Learning (ML): Machine learning involves algorithms that allow computers to learn from and make predictions based on data. For example, predictive text in smartphones uses machine learning to suggest the next word you might type based on your previous inputs.
  • Generative AI: This type of AI can create new content, such as text, images, or music, based on training data. For instance, DALL-E, an AI developed by the company OpenAI, generates images from textual descriptions, creating visuals that didn't previously exist.
  • Large Language Models (LLMs): These are a subset of generative AI, specifically designed to understand and generate human language. An example is OpenAI’s product ChatGPT, which can generate contextually relevant text based on the input it receives.
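To make the machine learning idea concrete, here is a minimal, toy sketch of the "predictive text" example above: it counts which word most often follows another in a sample of text, then suggests that word. (The sample sentence and function names are illustrative only, not how any real keyboard app works.)

```python
from collections import Counter, defaultdict

def train(corpus):
    """Learn, for each word, how often each other word follows it."""
    followers = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def suggest(followers, word):
    """Suggest the most common follower of a word, or None if unseen."""
    counts = followers.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train("the report is due friday the report needs review the survey is open")
print(suggest(model, "the"))  # "report" follows "the" most often in the sample
```

Real predictive-text systems are far more sophisticated, but the core idea is the same: learn patterns from past data and use them to make predictions.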

Benefits of AI

You may already know of some benefits of incorporating AI into your work. As mentioned, programs that can check your written documents for errors and suggest more precise writing are one everyday benefit of artificial intelligence, especially for those who write many reports! But AI has even greater potential than that.

AI’s biggest strength is its ability to process large amounts of complex information more quickly than a human can. It can also discover patterns within this data that a human might miss. Evaluation and research tasks like data cleaning, coding qualitative data, and data analysis could all be made more efficient with the help of AI programs. Additionally, automating time-consuming tasks like the transcription of interviews and the translation of evaluation tools and educational materials into multiple languages could allow us to focus on other priorities.
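Even without AI, some of the cleaning steps mentioned above can be automated with simple rules, which is the kind of tedious work AI tools aim to scale up. The sketch below uses a hypothetical survey export (the field names and values are invented for illustration) to show what automated cleaning looks like: trimming stray whitespace, standardizing capitalization, and flagging unusable values.

```python
import csv
import io

# Hypothetical raw survey export with inconsistent formatting.
raw = """respondent_id,age,county,smokes
001, 34 ,Yolo,YES
002,not given,yolo ,no
003,29,SACRAMENTO,Yes
"""

def clean_row(row):
    """Normalize whitespace and capitalization; flag unusable ages as None."""
    age = row["age"].strip()
    return {
        "respondent_id": row["respondent_id"].strip(),
        "age": int(age) if age.isdigit() else None,
        "county": row["county"].strip().title(),
        "smokes": row["smokes"].strip().lower() == "yes",
    }

rows = [clean_row(r) for r in csv.DictReader(io.StringIO(raw))]
print(rows[1])  # age becomes None, county "Yolo", smokes False
```

AI-assisted tools extend this idea to messier cases that fixed rules can't catch, but a human should still review the output, as discussed below.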

AI also has the potential to greatly help with public health surveillance and predictive modeling. For example, during the COVID-19 pandemic, AI was used to help forecast the spread of the virus.

The use of AI may also help address disparities in public health and evaluation by mitigating human bias in data cleaning and analysis. This could lead to more equitable health outcomes by ensuring that data-driven decisions are based on accurate and unbiased information. AI can also improve assistive technology, making the world (and our work) more accessible to those with disabilities. 

Pitfalls of AI

It’s important to note that the field of AI is largely unregulated at this time. However, with any technology, we have a moral obligation to consider its impact on other people when we decide to use it in our work, regardless of what is currently legal.

For example, generative AI can be used to create realistic images and videos, and even to mimic the voices and faces of real people, which has the potential to greatly disrupt public health messaging. The use of these technologies to spread misinformation is a very real concern for those of us in health-related fields, and may require us to develop more robust media and educational campaigns in the future. 

Similarly, LLMs like ChatGPT often return false information to queries; a recent Purdue University study found that ChatGPT answered programming questions incorrectly 52% of the time. Therefore, we don’t recommend depending on AI for research in place of reliable sources. 

We should also remember that machine learning comes with a human cost. Notably, in creating ChatGPT, OpenAI relied on low-paid data labelers in Nairobi, Kenya, to filter out disturbing text and images, at great cost to the workers’ mental health and wellbeing. The energy required to train and run AI models also has a significant impact on the environment and global power grids, as researchers at the University of Massachusetts found.

As exciting as AI's potential is, those of us working in public health and evaluation should keep in mind that our goal is to protect and improve our communities, not harm or exploit them.

Considerations when using AI 

When using AI in our work, there are some additional things to consider in order to maintain transparency and professional ethics: 

The first is to cite the prompts and AI tools used, in the same way we’d cite the type of statistical test and data analysis software in reports. 

If using AI for data cleaning, coding, or analysis, we will need to take steps to ensure anonymity and privacy, such as de-identifying personal information by using participant codes before uploading data into the program. 
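The de-identification step above can be done with a short script before any data leaves our hands. This is a minimal sketch using invented names and responses: each participant's name is replaced with a code like "P001," and the name-to-code key is kept separately and never uploaded.

```python
import csv
import io

# Hypothetical interview data containing names that must not be uploaded.
raw = """name,response
Maria Lopez,"I quit after the workplace ban."
James Chen,"The quitline helped me a lot."
"""

def deidentify(rows):
    """Replace names with participant codes; return cleaned rows and the key."""
    codes = {}  # name -> code; store this key file securely, offline
    cleaned = []
    for row in rows:
        if row["name"] not in codes:
            codes[row["name"]] = f"P{len(codes) + 1:03d}"
        cleaned.append({"participant": codes[row["name"]],
                        "response": row["response"]})
    return cleaned, codes

cleaned, key_file = deidentify(list(csv.DictReader(io.StringIO(raw))))
print(cleaned[0]["participant"])  # P001
```

Only the `cleaned` rows would go to the AI tool; the `key_file` mapping stays with the evaluation team so responses can be re-linked later if needed.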

Finally, while AI could be used to mitigate human bias, be aware that the people who code and train AI models can include rules that will introduce bias to the programming. When using AI for tasks like translation and transcription, the final products will need to be reviewed by humans to ensure accuracy.

This is just the beginning of our conversation about AI in evaluation—there will be much more to come!

Share your thoughts!

Have you incorporated AI into your work? What benefits have you seen with AI? What questions or concerns do you have about the mainstream use of artificial intelligence? Please fill out our anonymous survey and share your thoughts! TCEC will use your responses to shape our future webinars and resources on this topic.