Ageism in AI: new forms of age discrimination and exclusion in the era of algorithms and artificial intelligence (AGEAI)

Abstract

The deployment of artificial intelligence (AI) holds great promise for the economy and society. However, it also brings risks concerning social equality, privacy, fairness, and accountability. Recent studies have shed light on significant race and gender biases in algorithmic systems, revealing how AI can negatively impact marginalized groups. Surprisingly, the category of age, crucial for social inclusion and equality in aging societies, has been largely overlooked in research and policy on AI bias.

The COVID-19 pandemic has further exposed how older groups endure disproportionate social exclusion due to limited access to digital resources and literacy. Despite this evidence, the role of AI systems in exacerbating these inequalities has only recently come under scrutiny. The World Health Organisation (WHO) has expressed concern that unchecked AI technologies may perpetuate existing ageism in society, compromising the quality of health and social care for older people (WHO, 2022). Yet the extent and forms of ageism in AI remain largely unexplored territory.

To bridge this gap, our interdisciplinary study examines ageism in AI, seeking to understand its implications and potential consequences for older individuals and the broader society. The aim of the AGEAI project is to critically assess how ageism operates in AI systems, products, services, and infrastructure by focusing on critical areas of AI deployment: healthcare, employment and hiring systems, mobility and transport, financial services, and face recognition. The AI technology developed in these areas has been classified as "high-risk" by the proposed EU AI Act (2022) and will need to be rigorously scrutinized to meet the standards of trustworthy, human-centered, and fair AI.


Relevant Publications