The new study, published in BMJ Digital Health & AI, adds to a growing body of evidence on public views about health data sharing for AI research. It builds on that literature by placing public voices at its centre, offering a detailed analysis of how people weigh risks, benefits and trust when deciding whether to share their health data.
Lead author Rachel Kuo, NIHR Doctoral Research Fellow, said: 'AI is increasingly embedded in public consciousness, and there is rapid innovation in its use for healthcare. However, developing and testing AI requires access to large volumes of patient data, which raises concerns about confidentiality and security. Our aim was to understand how people think about sharing their data in the context of AI, and whether AI introduces particular fears or perceived benefits that shape those decisions.'
The researchers conducted eight online focus groups with 41 adults from across the UK, selected to reflect a range of ages, ethnicities, health experiences and socioeconomic backgrounds. Participants were invited to discuss realistic scenarios involving health data sharing for AI, including university-led research, large research databases, and projects involving commercial companies.
Three key themes were developed from the analysis.
Perceived risks of health data sharing
Across the discussions, participants expressed cautious and conditional support for health data sharing. Anonymisation was widely seen as essential, but not foolproof, particularly for people with rare conditions or where large datasets are linked together. Many participants accepted that some level of risk was inevitable but wanted greater transparency about how data are protected and what would happen if things went wrong.
Trust varied depending on who was using the data. Universities and the NHS were generally seen as acting in the public interest, while the involvement of commercial organisations prompted greater scepticism. However, this scepticism softened when commercial involvement could be clearly linked to patient benefit and was subject to strict oversight.
Individual risk-benefit assessment
Participants decided whether to share data by weighing perceived risks against potential benefits to themselves and others. Concerns about discrimination, misuse and unknown future risks were set against potential benefits such as improved care, faster diagnosis and helping future patients. Many participants, particularly those with long-term conditions or previous experience of benefiting from medical research, described concern for the well-being of others and the 'greater good' as an important motivation.
Informed consent as a foundation for trust
Consent emerged as a central foundation for trust. Participants wanted information that was clear, specific and relevant to the particular study, and presented in an accessible format. They also emphasised the importance of how consent is sought, opposing requests made during stressful or emotionally vulnerable clinical moments. Suggestions included tailored approaches, opportunities to opt out of certain uses of data, 'cooling-off' periods, and the ability to withdraw consent at a later stage.
A strength of the study was that it was co-designed and carried out with patient and public involvement (PPI) contributors, who were instrumental in shaping the research questions, delivering the focus group interviews, and analysing the findings. This approach helped ensure that the study focused on issues that matter to the public, built participants' confidence, and drew out genuine opinions rather than responses shaped by assumptions about what people ought to think.
Rosie Hill, a PPI co-producer for the study said: 'This is very important work that speaks directly with the public to understand the views that really matter. It is essential that we understand, in real time, how this area of technology and science is advancing. The themes developed in the study show the need for public engagement, to understand best practice and acceptability, in order to advance this important area.'
Another PPI co-producer, Judi Smith, said: 'The focus groups gave a fascinating insight into how people assess the risks and benefits of Artificial Intelligence in healthcare. You get a flavour of which organisations they would trust to access their data, and sometimes their reasoning. The comment one member made reflects how vital this research is, as the picture is complicated. She said that with her "person-hat" on, she had lots of reservations about giving up her data, especially to commercial companies, but with her "patient-hat" on she would gladly share her data, with almost anyone, if it sped up new treatment for her long-term condition.'
Rachel said: 'As systems increasingly rely on large-scale data to develop and evaluate AI, public trust can't be taken for granted. Our research shows that people are willing to support data sharing, but only under clear conditions. These include transparency about how data are used, strong governance, meaningful consent and demonstrable public benefit. Understanding these expectations will be essential if we want data-driven innovation in healthcare to be both ethical and sustainable.'