Technology disruptions often arrive quickly and loudly, as has been the case with recent evolutions in artificial intelligence (AI).
The use of AI-powered chatbots and devices could reshape the way people live and work for decades to come. But any major technological shift also carries the risk that some people will be left behind — and that is exactly how many older people may be feeling in the midst of the AI revolution.
This is according to a recent report from Kiplinger, which examined recent AI studies and detailed how this technology is trained by workers and users.
One cited study from 2024, for instance, found that AI chatbots “frequently returned responses laced with age-related stereotypes, evidence that bias isn’t theoretical, but built into the behavior of the tools themselves.”
Evidence suggests that the data selected for AI training, which is largely controlled by younger workers and users, could slowly be introducing age-related biases into the way these systems work. That could leave older users out of the equation when it comes to utility and user experience.
Brittne Kakulla, a senior research adviser at senior advocacy group AARP, said that much of the potential disconnect with AI from older users comes from what its trainers look for when inputting data — as well as the styles of older people who might appreciate function over form.
“I think the biggest disconnect is in what’s prioritized in tech itself,” Kakulla told Kiplinger. “Older adults prioritize function over flash, which can be counter to the tech industry’s obsession with speed and novelty.”
This raises a question of usefulness and relevance for older users — one compounded by the well-documented aging of societies across the developed world.
One of the questions posed by the Kiplinger piece revolves around long-term impacts on work and life through these apparent exclusions — particularly in industries like financial services and health care.
“Researchers and advocates say the stakes are high,” Kiplinger noted. “Excluding older adults from the design, testing and governance of AI could lead to systems that not only overlook their needs but also fall short for everyone.”
When older people are left out of the discussion, particularly with such a major technological development driving rapid public and private sector adoption, the results can be severely negative.
As pointed out by Kiplinger, a 2022 policy brief published by the World Health Organization said there is potential for amplified age discrimination for a cohort that would need the most assistance.
“[A]ge often determines who receives certain medical procedures or treatments,” the brief explained. “Any such systematic discrimination in the provision of health care can be reproduced in AI, which builds on historical data. In this way, AI algorithms can fix existing disparities in health care and systematically discriminate on a much larger scale than biased individuals.
“Health and medical data generated from other sources, including clinical trials, also tend to exclude or insufficiently represent older people in the data set.”
But the explosive growth in AI adoption over the past several years could suggest that the technology is “becoming more relevant to older adults,” according to Kakulla.
“Tech companies need to account for diversity across the entire lifestage, and design for the lifestage,” she said.