WZB

Ageing Well in Digital Times

Trustworthy AI for the Healthcare Sector

Artificial intelligence (AI) is widely regarded as a promising development in the healthcare sector, with anticipated benefits such as improved diagnostic accuracy, earlier detection of health risks, and enhanced support for older adults through assistive technologies. Yet, the development and use of such technologies raises questions about justice, participation, and responsibility.

Each and every one of us is getting older—and this brings into focus the question of how the digital transformation will affect us in old age. The tension between the opportunities and risks of digital technologies is particularly evident among older people: although they potentially benefit from better diagnostic options and personalized medical treatment, they can also be particularly affected by digital exclusion mechanisms. Age-related barriers in the use of technologies and the lack of consideration of different user needs in algorithmic systems can impede both the usability of and access to digital health services.

In the AGEAI research project at the WZB, we are investigating how older people perceive the use of AI in healthcare and which factors can influence their trust in these technologies. In a participatory workshop, we explored the perspectives of older people on various areas of application, such as symptom checker apps for private use as well as decision-support systems in hospitals, in order to identify their experiences, concerns, hopes, and expectations.
 

What does trust in AI in healthcare mean for older people?

The German Ethics Council addresses the issue of the diffusion of responsibility in AI systems in its statement „Mensch und Maschine – Herausforderungen durch Künstliche Intelligenz“ (“Humans and Machines—Challenges of Artificial Intelligence”). This refers to the fact that decision-making processes are spread across many levels, from the development of AI to training with data and its actual application. This makes it difficult to clearly identify who is ultimately responsible for certain processes. The implementation of AI can also pose challenges and inequalities for certain social groups. As WZB researcher Justyna Stypińska shows, older people can be confronted with five different forms of exclusion: (1) age-related biases in algorithms and datasets (technical level), (2) stereotypical ideas and prejudices in development (individual level), (3) the invisibility of age in AI discourses (discourse level), (4) discriminatory effects of AI use on different age groups (group level), and (5) the exclusion of older people as users of AI technologies, services, and products (usage level).

Older people are often insufficiently involved in the development and use of technical innovations. Andreas Bischof and Juliane Jarke highlight the significance of participatory approaches that genuinely involve older individuals and reflect the complexities of their everyday lives. At the same time, it is essential to critically reflect on the extent to which persistent stereotypes shape these processes, or whether there is an underlying assumption that technological solutions are sufficient to address complex social issues. In order to incorporate the perspectives of older adults into this discourse, we organized a participatory workshop with older adults. It is, of course, essential to recognize that older adults do not constitute a homogeneous group—their experiences, perspectives, needs, and concerns are diverse and highly individual.

From the workshop, we were able to derive three factors that can strengthen trust in AI-based technologies in the healthcare sector, provided that the systems function reliably in line with their application objectives. The first key factor is human supervision and control. The workshop participants’ statements make it clear that AI-supported systems in clinical settings are more likely to be trusted if medical staff continue to play a decisive role in diagnostics and treatment. This was not just a concern about technical errors or misdiagnoses, but also about the loss of personal closeness, which was seen as a prerequisite for trust in medical decisions. The presence of doctors and nurses should not only ensure the quality of AI-supported diagnostics, but also convey the feeling of being perceived as an individual. One participant emphasized: “This emotional side. This dedicated empathy [...]. I don't know whether future generations will be able to do without it, but I certainly wouldn't want to do without it.” [Translated from German]

Secondly, transparency and explainability were emphasized as a fundamental prerequisite for trust in AI-supported healthcare technologies. A lack of clarity led to skepticism, for example when it was not clear what data was being used, how the AI systems fundamentally work, how they were applied, and who was responsible. The concern that AI-based decisions could rest on biased (for instance, age-discriminatory) or incorrect datasets and could influence medical care unnoticed significantly impaired trust in such systems.

The third aspect that plays a central role in trust in AI systems is exposure, i.e., previous experience and interaction with digital technologies. People who regularly come into contact with digital technologies and AI, either professionally or privately, were more open-minded, especially if their previous experiences were positive. The context in which the encounter takes place is particularly relevant here: the workshop participants emphasized that they would first like to come into contact with AI systems in everyday and lower-risk areas of application.
 

Designing trustworthy AI contexts

Our findings show that various contextual factors are also crucial for fair and trust-promoting technology integration. Ageism in digital healthcare should not be viewed as an isolated phenomenon; rather, it emerges at the intersection of multiple social inequalities. For instance, older adults who live alone and lack social support may face greater challenges in navigating digital technologies, while individuals with lower incomes are more likely to rely on services that are of limited quality or offer less privacy protection.

Language barriers and sensory impairments can further hinder access when digital health technologies are not designed with inclusivity in mind. These intersecting risks of discrimination underscore that ageism is not a uniform issue but manifests differently across various societal contexts. Addressing it therefore requires more than technological modifications; it necessitates comprehensive structural interventions aimed at dismantling broader social barriers.

Advancing transparency and explainability in AI systems can benefit from a socio-technical perspective, which frames transparency not solely as a technical attribute but as something that must be designed in a context-sensitive and user-centered manner. The team led by Bianca Schor proposes design guidelines for AI systems that enable meaningfully contextualized transparency tailored to the needs of medical professionals. Relevant information should be selectively provided based on the specific roles, knowledge levels, and requirements of the intended users. Transparency regarding the functioning and use of AI—adapted to different stakeholder groups, their situations, and informational needs, and delivered at appropriate points in time—can also play a crucial role in fostering trust among patients.

In addition to technology-related transparency, other aspects that promote trust were emphasized in the workshop. One example is the influence of geographical proximity. AI applications that are developed locally, in Berlin or Germany, where the participants themselves live, were perceived as more trustworthy. This suggests that physical proximity can strengthen the feeling that those responsible are more accessible, and that a familiar regional or national context can create a sense of familiarity grounded in shared legal, cultural, and institutional frameworks.
 

Conclusion

The future of age-friendly AI in healthcare relies less on technological advancement alone and more on how such technologies are embedded within broader socio-technical contexts, organizational settings, and social structures. AI should not be viewed as an isolated tool, but rather as an integrated element of a solidarity-based healthcare system that upholds dignity, participation, and social justice. Ageing well in the digital era requires more than innovation—it demands inclusive, reflective, and responsible approaches to the design and deployment of technologies that place human needs and values at their center.

 

Literature

Bischof, Andreas/Jarke, Juliane: „Configuring the Older Adult: How Age and Ageing are Re-Configured in Gerontechnology Design“. In: Alexander Peine/Barbara L. Marshall/Wendy Martin/Louis Neven (Eds.): Socio-Gerontechnology: Interdisciplinary Critical Studies of Ageing and Technology. London: Routledge 2021, p. 197–212.

Deutscher Ethikrat: Mensch und Maschine – Herausforderungen durch Künstliche Intelligenz. Stellungnahme, 2023. Online: https://www.ethikrat.org/fileadmin/Publikationen/Stellungnahmen/deutsch/stellungnahme-mensch-und-maschine.pdf

Schor, Bianca G. S./Kallina, Emma/Singh, Jatinder/Blackwell, Alan: „Meaningful Transparency for Clinicians: Operationalising HCXAI Research with Gynaecologists“. In: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), 2024, p. 1268–1281. DOI: 10.1145/3630106.3658971. 

Stypińska, Justyna: „AI Ageism: A Critical Roadmap for Studying Age Discrimination and Exclusion in Digitalized Societies“. In: AI & Society, 38, 2023, p. 665–677. DOI: 10.1007/s00146-022-01553-5.

Image: The section Age(ing) and Society of the German Sociological Association (DGS) held its spring conference at the WZB. The theme was "Inclusion and exclusion of older people in the age of artificial intelligence". Paro, a small robot in the shape of a baby seal, pictured here, was one of the participants.

26/3/2025

This text is licensed under a Creative Commons Attribution 4.0 International License.
