Oxford study reveals tech’s healthcare impact, ethical concerns
Researchers emphasize ethical AI use in social care.
A University of Oxford pilot study found that some care providers are using generative AI chatbots such as ChatGPT and Bard to draft care plans. However, Dr. Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, warns that the practice puts patient confidentiality at risk.
Green also cautioned that AI-generated care plans could be substandard, and that carers acting on flawed or biased information could inadvertently harm the people they look after.
Despite these risks, Green acknowledged potential benefits of AI in social care, such as easing administrative burdens and enabling more frequent care plan reviews. Although she does not currently advocate this approach, she noted that apps and websites for the purpose are already being developed.
Health and care organizations are already deploying AI-based technology. PainChek, for instance, uses AI-trained facial recognition in a phone app to assess pain in people who cannot communicate verbally, by detecting subtle muscle movements. Oxevision, used by half of NHS mental health trusts, places infrared cameras in seclusion rooms to monitor patients with severe dementia or acute psychiatric needs, assessing fall risk, sleep patterns, and activity levels.
Sentai, a care monitoring system at an earlier stage of development, uses Amazon’s Alexa speakers to remind people without 24-hour carers to take their medication and to let relatives check in remotely.
The Bristol Robotics Lab, meanwhile, is developing a device for people with memory problems that incorporates detectors to shut off the gas supply if a hob is accidentally left on, according to George MacGinnis, challenge director for healthy aging at Innovate UK.
“In the past, ensuring safety would require a visit from a gas engineer,” MacGinnis explained. “However, Bristol is collaborating with disability charities to develop a system that allows individuals to do this themselves safely.
“We’ve also supported a circadian lighting system that adjusts to individuals, aiding them in restoring their circadian rhythm, which is often disrupted in dementia.”
While those in creative fields fear AI replacing them, the social care sector faces a different challenge: demand for care far exceeds the supply of carers. The sector has about 1.6 million workers and 152,000 unfilled vacancies, while 5.7 million unpaid carers look after loved ones.
“People tend to view AI in a binary manner – either it replaces a worker or things remain unchanged,” explained Lionel Tarassenko, professor of engineering science and president of Reuben College, Oxford. “However, the reality is different. AI can uplift individuals with limited experience, bringing them to the level of those with extensive expertise.
“I personally experienced this when caring for my father, who recently passed away at 88 with dementia. We had a live-in carer, but when my sister and I took over on weekends, we realized we lacked the same skills as the trained carer. AI tools could have helped us provide a similar level of care.”
Yet some care managers are wary of AI technology. Mark Topps, a social care professional and co-host of The Caring View podcast, said many in the sector fear that adopting it could inadvertently breach Care Quality Commission regulations and cost them their registration.
He noted that many organizations are hesitant to take action until the regulator issues guidance, fearing the repercussions of making a mistake.
Last month, 30 social care organizations, including the National Care Association, Skills for Care, Adass, and Scottish Care, gathered at Reuben College to discuss responsible use of generative AI. Green, who led the meeting, stated that they aimed to develop a good practice guide within six months, intending to collaborate with the CQC and the Department for Health and Social Care.
“We aim to establish guidelines that the DHSC can enforce, clearly defining what responsible use of generative AI in social care entails,” she explained.