A recent study by the University of Oxford revealed that the use of unregulated AI bots in social care could pose a risk to patient confidentiality. The study found that some UK care providers have been using generative AI chatbots to create care plans for people receiving care, a practice that could inadvertently cause harm. Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, warned that AI-generated care plans might be substandard and could expose personal data to unauthorised parties. The researchers therefore argue that the AI revolution in social care needs to be governed by ethical standards to safeguard patients and their data.