Chapter 5 Conclusion

Overall, my dissertation shows both the promise and the limits of using AI in public health. AI is not an automatic fix. Like any public health tool, it needs to be matched to a clear task and evaluated with care.

Chapter 2 covered supervised learning and showed the value of fine-tuning AI models for health communication tasks. Building models for narrow goals, like assessing the health literacy demands of a text, makes them easier to test. Open-source tools with clear documentation also made the models easier to interpret and apply in public health settings.
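
For readers who want to see the general shape of this approach, here is a minimal sketch of fine-tuning a small open-source classifier for a narrow, testable label. It assumes the Hugging Face transformers and datasets libraries; the model name, the two-label scheme, and the toy sentences are illustrative assumptions, not the data or models from my studies.

```python
# Minimal sketch: fine-tune a small open-source model to flag sentences with
# high health literacy demands. Data and labels here are illustrative only.
import torch
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Toy labeled sentences: 1 = high literacy demand, 0 = plain language.
examples = Dataset.from_dict({
    "text": [
        "Take one tablet by mouth two times a day with food.",
        "Administer the prescribed dosage contingent upon postprandial status.",
    ],
    "label": [0, 1],
})

model_name = "distilbert-base-uncased"  # small open-source baseline; an assumption, not my study model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

train_data = examples.map(tokenize, batched=True)

# Fine-tune briefly; a real project would use far more data, a validation split,
# and task-appropriate metrics so the narrow model stays easy to test.
args = TrainingArguments(output_dir="hl-demand-model", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
Trainer(model=model, args=args, train_dataset=train_data).train()

# Spot-check a held-out sentence (part of what makes a narrow model easy to test).
inputs = tokenizer("Your visit summary is written in plain language.", return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
with torch.no_grad():
    print("Predicted label:", model(**inputs).logits.argmax(dim=-1).item())
```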

Chapter 3 covered generative AI. It showed how much work it takes to evaluate these tools over time, and why that matters. Tools like ChatGPT are unpredictable and are not trained for your specific health literacy tasks, so you need clear, narrow use cases. You can fine-tune them, but that shifts the process back toward what we saw in Chapter 2: more work, less magic.
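
One concrete way to do that ongoing evaluation is to keep a fixed prompt set for your narrow use case and re-run it on a schedule, logging dated outputs so reviewers can see how a tool's answers drift. Below is a minimal sketch of that idea; ask_model is a hypothetical placeholder for whatever generative AI client you use, and the prompts and file name are illustrative.

```python
# Minimal sketch: re-run a fixed prompt set against a generative AI tool on a
# schedule and append dated outputs to a CSV for later expert review.
import csv
from datetime import date
from pathlib import Path

PROMPTS = [
    "Rewrite this at a 6th-grade reading level: 'Hypertension is asymptomatic.'",
    "Explain what 'prior authorization' means in plain language.",
]

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder; replace with a call to the tool you are evaluating."""
    return "(replace ask_model with a real API call to capture the tool's answer)"

def log_run(log_path: Path = Path("longitudinal_eval.csv")) -> None:
    """Append today's outputs so reviewers can compare answers across dates."""
    is_new = not log_path.exists()
    with log_path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "prompt", "output"])
        for prompt in PROMPTS:
            writer.writerow([date.today().isoformat(), prompt, ask_model(prompt)])

if __name__ == "__main__":
    log_run()
```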

Chapter 4 introduced the 4-Factor framework for assessing AI. It looks beyond performance to also weigh flexibility, explainability, and fairness, which are key concerns for public health as a field that seeks to advance health equity and serve all communities.
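
To make the shape of such an assessment concrete, here is a small sketch of how a team might record ratings on the four factors for a candidate tool. The 1-to-5 scale, the unweighted average, and the example tool are my assumptions for illustration, not the framework's own scoring rules.

```python
# Sketch: record ratings on the four factors for a candidate AI tool.
# The 1-5 scale and unweighted average are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FourFactorAssessment:
    tool: str
    performance: int     # e.g., accuracy on the narrow task you care about
    flexibility: int     # how easily it adapts to new settings or audiences
    explainability: int  # how well reviewers can see why it produced an output
    fairness: int        # how it performs across the communities you serve

    def summary(self) -> float:
        """Unweighted mean across the four factors (an assumption, not a rule)."""
        scores = [self.performance, self.flexibility, self.explainability, self.fairness]
        return sum(scores) / len(scores)

assessment = FourFactorAssessment(
    tool="hypothetical readability assistant",
    performance=4, flexibility=3, explainability=2, fairness=3,
)
print(f"{assessment.tool}: {assessment.summary():.2f} / 5")
```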

5.1 Calls to Action

If you’re working in health communication and planning to use AI:

  • Apply AI to narrow tasks that you can track over time.
  • Hire health literacy experts to review AI outputs.
  • Weigh the long-term benefits of investing limited funds in community health workers instead.

5.2 Let’s Connect

As you might guess, I am online a lot. If you’re looking for more info on public health, media, and technology:

5.3 Acknowledgements

This site does not contain the full breadth of the research methods and findings of my dissertation; I will share those in published journal articles. In the meantime, I would like to acknowledge my study coauthors. My dissertation committee: Karen Emmons, Sebastian Munoz-Najar, and K. Vish Viswanath. The research assistants on these papers: Zichao Li and Elissa Sherer.