GPs warned against using AI for complaint responses

GPs should avoid using AI for complaint responses due to the risk of ‘inaccuracy’ or ‘insincerity’, a medical defence organisation has warned. 

The Medical Defence Union (MDU) issued guidance this week saying that ‘some doctors are turning to artificial intelligence programs’ such as ChatGPT to ‘draft complaint responses for them’. 

But the MDU said this ‘raises issues of confidentiality’ and may also give patients the sense that it is a ‘false apology’.

In some cases, patients who received an AI-generated response to their complaint were ‘suspicious of the wording’ and were ‘able to reproduce the same text’ via AI, according to the defence organisation.

MDU medico-legal adviser Dr Ellie Mein told doctors that it is ‘only natural’ they want to find ways to ‘work smarter’ in the face of ‘increased complaints and immense pressure’. 

But she warned that there is ‘no substitute for the human touch’ and that letters drafted mainly by AI ‘can undermine the authenticity of a genuine apology’ and are ‘not without risk to the clinician’. 

‘That’s not to say that AI can’t act as a prompt to get you started but it’s vital that patient complaints are responded to in a suitably authentic and reflective manner,’ Dr Mein added.

On confidentiality, the guidance said it is clear that including patient identifiable information in the prompt for a program like ChatGPT is ‘inappropriate’. 

The MDU also warned GPs that AI-generated letters could be inaccurate or use language from the program’s ‘country of origin’, often the USA, rather than the UK.

Official NHS data showed there were over 125,000 complaints within primary care in 2022/23, an increase of 5% on the previous year. 

Earlier this year, a digital health expert warned GPs that when using AI for clinical purposes they need to be aware of the liability risks and maintain their ability to ‘critically appraise’ the technology.

In May, LMC leaders voiced concerns about the developing use of AI in general practice, voting in favour of a motion that called for ‘appropriate controls’ on the technology.