How could AI be used to deliver public health activities?

  • Post last modified: March 6, 2025
  • Post category: Science
Traditionally, public health surveillance relies on manual data collection and analysis, which can be time-consuming and prone to errors. AI can transform this process by automating data analysis, quickly identifying potential outbreaks, and issuing timely warnings. For example, the US Centers for Disease Control and Prevention used AI to track the spread of COVID-19 during the pandemic by combining data from multiple sources, such as electronic health records, social media, and news outlets.
AI can also assist in monitoring trends in risk factors for non-communicable diseases by analysing demographic, behavioural, and environmental data and feeding these data into projections used in planning. AI’s ability to process large volumes of data rapidly can speed up the flow of information. For instance, AI can extract and analyse free-text data from sources such as death certificates to identify drug-related deaths well before formal coding processes are completed. This type of analysis can enable public health authorities to respond more effectively to emerging threats.
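To make the free-text screening idea concrete, the sketch below flags possible drug-related deaths from cause-of-death text with a simple keyword rule. The certificate texts and term list are invented for the example; operational systems would typically use trained text classifiers rather than keyword matching, but the screening logic is the same in outline.

```python
# Illustrative sketch: flagging possible drug-related deaths from
# free-text cause-of-death fields before formal ICD coding is complete.
# The certificate texts and the keyword list are synthetic examples.

DRUG_TERMS = {"overdose", "heroin", "fentanyl", "opioid", "cocaine",
              "methadone", "amphetamine", "drug toxicity"}

def flag_drug_related(cause_text: str) -> bool:
    """Return True if the free-text cause of death mentions a drug term."""
    text = cause_text.lower()
    return any(term in text for term in DRUG_TERMS)

certificates = [
    "Acute fentanyl toxicity; history of opioid dependence",
    "Ischaemic heart disease",
    "Accidental heroin overdose",
]

flagged = [c for c in certificates if flag_drug_related(c)]
print(len(flagged))  # 2 of the 3 synthetic records are flagged
```

A rule-based screen like this gives an early signal within hours of registration; a machine-learning classifier trained on coded historical certificates can then refine the flagged subset.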
AI can be particularly useful in behavioural epidemiology, in which data from mobile apps and social media can be analysed to track health behaviours, such as diet, physical activity, and mobility. AI can also evaluate the impact of interventions designed to change these behaviours and model the trade-offs involved. These insights can then be linked to disease prevalence, providing a holistic understanding of the factors contributing to public health issues. Machine learning algorithms have been used to extract people’s sentiments and beliefs from social media interactions, an approach that has found several mental health applications. Another example comes from the field of environmental health, in which AI-powered tools use machine learning to monitor air quality in urban areas.
AI can also help optimise resource allocation. During the COVID-19 vaccination campaigns, AI models analysed demographic data, health records, and geographical information to establish the best locations for vaccination sites.
AI plays an increasingly important part in public health communication by improving the tailoring of messages to specific populations. AI tools can segment populations on the basis of demographic and behavioural data (eg, through the use of k-means clustering and lasso regression) to increase the likelihood that health messages are culturally appropriate and accessible. AI can also assist in crafting public health messages in multiple languages and at various health literacy levels and can help identify misinformation.
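A minimal sketch of the k-means segmentation mentioned above is shown below, run on two made-up, standardised features (eg, age band and app-usage score). The data points and the choice of k = 2 are assumptions for illustration; in practice a method such as lasso regression would first select which features best predict message engagement.

```python
# Illustrative sketch: segmenting a population into audience groups
# with plain k-means on two standardised features. Data are synthetic.

def kmeans(points, centroids, iters=10):
    """Assign each point to its nearest centroid, then move each
    centroid to the mean of its assigned points; repeat."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(
                range(len(centroids)),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                              + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # one audience segment
          (0.9, 0.8), (0.85, 0.9), (0.95, 0.85)]  # another segment
centroids, clusters = kmeans(points, centroids=[(0.0, 0.0), (1.0, 1.0)])
print([len(c) for c in clusters])  # two segments of 3 people each
```

Each resulting segment can then receive messages tailored to its language, literacy level, and cultural context.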
AI-driven chatbots offer a new means of communicating health-related messages. During the COVID-19 pandemic, WHO used AI-powered chatbots on platforms such as WhatsApp to provide real-time information on the virus, including guidance on symptoms, prevention measures, and vaccination. A recent review concluded that chatbots, by providing instant responses, can help to dispel misinformation and guide the public to reliable resources.
Perhaps the most straightforward application of AI is in automating routine tasks, such as generating standard letters or any task that entails summarising large amounts of information (eg, regulations, guidelines, or scientific reports) to produce concise summaries or recommendations. This type of application can substantially reduce the administrative burden on public health professionals, allowing them to focus on strategic tasks such as policy development and programme implementation.

Challenges in implementing AI in public health

Although AI’s potential in public health is considerable, there are notable challenges to its widespread adoption. One of the most important considerations is ensuring that AI is used equitably. AI models are often prone to bias, particularly if they are trained on non-representative datasets. This bias can exacerbate existing health disparities, particularly affecting marginalised and disadvantaged communities. AI systems must be developed with an equity lens that ensures diverse populations are adequately represented in training data. Developers and users must also be aware of issues such as dual valence (whereby a factor that serves as a marker of disadvantage or stigmatisation, such as a postcode, is included in the algorithm) and automation bias (whereby AI-generated decisions are privileged over the wishes of the individuals affected). Bias and equity are distinct concerns in algorithmic decision making. Bias pertains to the fairness of the prediction algorithm itself: prediction errors should not be systematically related to specific individual characteristics. Equity, by contrast, addresses justice in the allocation principles that govern how outcomes are distributed among individuals, on the basis of broader ethical considerations. Preferences for what constitutes a so-called fair algorithm can vary substantially among stakeholders, who might value different fairness metrics, reflecting diverse ethical principles.
Individuals must trust AI sufficiently to use it, but not so much that they fail to challenge it when its results appear wrong. This concern has stimulated the emergence of explainable AI, or XAI, in which algorithms explain how they have reached their decisions and what would be needed for them to reach different ones. Although attractive in theory, XAI is not a perfect solution because it does not fully eliminate the risk of false confirmation, in which both the human and the AI agree on an incorrect decision.
Data privacy is crucial when implementing AI in public health, especially as AI systems often rely on integrating data from multiple sources. Combining personal health data with other datasets increases the risk of reidentification and stigmatisation. Increasing reliance on these systems and the amount and scope of data they hold make them attractive targets for individuals seeking to extract ransoms or steal data. This threat calls for robust controls on data access and investment in cybersecurity.
When AI is entrusted to perform particular tasks, humans must ensure the technology has been developed and used appropriately. This assurance means having a so-called human in the loop: the active participation of users (ie, health professionals, patients, or citizens) at every stage, beginning with algorithm development, in which humans design, test, and refine the model to ensure it aligns with ethical and technical standards. At the implementation stage, the human role shifts to oversight, monitoring how the algorithm operates in real-world scenarios and avoiding unforeseen issues such as bias or errors. In day-to-day use, human involvement is required to ensure control, with AI acting as a supportive tool. Clear lines of accountability are also needed should anything go wrong.
Many public health institutions still rely on outdated health information systems that are not equipped to handle the large-scale data analysis that AI requires. Upgrading these systems and improving data-sharing mechanisms are essential steps for successfully integrating AI into public health. A 2023 survey on digital health in the WHO European region found that a little over half of countries reported having a unified interoperability strategy for secure information sharing across the health system, whereas only a third had a specific policy on using big data and advanced analytics in the health sector.

 
