Associate Professor Sara Khalid co-leads the project with Dr Faisal Sultan from the Shaukat Khanum Memorial Cancer Hospital and Research Centre (SKMCH&RC) in Pakistan. Together they will pursue an innovative global health and development research project titled ‘Fair and safe medical AI: a South Asia case study to co-develop local agency & trust leaving no one behind.’
The Grand Challenges project will explore the use of large language models (LLMs) – the class of artificial intelligence (AI) systems behind tools such as ChatGPT that mine text-based content – to unlock, summarise and present medical information quickly and easily for patients and healthcare providers. If successful, it would boost clinical decision-making and help close health gaps for diverse populations and vulnerable groups, including women and children in the Global South.
South Asia is home to a quarter of the world’s population. Its healthcare systems are under unprecedented pressure, largely over-subscribed and under-resourced. Up to 80% of the information required for time-critical decision-making is buried in full-text patient notes, which may include key patient-specific details on family history and the social, behavioural or environmental determinants of health.
The team will apply an existing state-of-the-art LLM to a large South Asian real-world database covering cancer and multiple other conditions, including COVID-19 patient records, to test whether it can speed up and streamline the process of finding and prioritising relevant information from free-text notes and patient histories. If found to be feasible, accurate, safe and ethical, the tools can be handed downstream to support medics in other countries or across other disease profiles, particularly in remote settings.
‘AI is increasingly being used in healthcare with structured data such as images and electronic health records to make diagnoses, pick up unidentified conditions, help prioritise information and create safety through reminders and assisted decision-making. But unstructured free-text clinical notes are messy, and modelling them is trickier, even more so in real-world settings rather than in computer-based models,’ said Sara. ‘Our ultimate aim is to see if we can free up time for overworked and overburdened doctors and nurses, shifting time-consuming tasks to allow for better outcomes for a greater number of patients.’
Dr Faisal Sultan added: ‘While electronic hospital records have created terabytes of information, it is often a challenge to sift through the enormous amounts of data, especially free-text descriptions and histories. Making sense of this in real time is likely to be a great contribution of self-learning, or AI-type, systems.’
Sara’s project is one of nearly 50 Grand Challenges Catalyzing Equitable Artificial Intelligence (AI) Use grants announced by the Gates Foundation to support low- and middle-income countries (LMICs) in harnessing AI’s power for good and to address the urgent need for LMIC participation in co-creating this technology as it rapidly evolves. The project’s findings will contribute to an evidence base for testing LLMs that can fill wide gaps in access to, and equitable use of, these tools. Each grant represents an opportunity to solve or mitigate a real challenge experienced by communities, researchers and/or governments in LMICs.
The Planetary Health Informatics Lab combines artificial intelligence and remote-monitoring technology with international real-world health and environmental data to further our understanding of disease and fill gaps in global health, leveraging common data models and federated network analytics. The group works closely with clinicians, engineers, epidemiologists, conservationists, data scientists, and public and patient groups in the UK, Europe, Latin America, South Asia and Africa to co-create models for equitable and ethical solutions to planetary health problems.