Doctors tap into AI despite lack of regulation

A survey has found one in five doctors could be using AI in a clinical setting. | Photo: Aaron Amat (iStock)

As many as one in five doctors could be using artificial intelligence (AI) in their clinical practice, despite a lack of formal guidance.

An international study, based on responses from GPs in the United Kingdom, found a fifth of family doctors there had “readily incorporated AI into their clinical practice, despite a lack of any formal guidance or clear work policies on the use of these tools”.

The team of researchers from Sweden, Switzerland and the United States surveyed more than 1000 UK GPs on their use of AI chatbots such as ChatGPT, Bing AI and Google's Bard.

They found more than a quarter used AI tools to generate documentation, and a similar proportion said they used them to generate a differential diagnosis.

A quarter also said they used the tools to suggest treatment options.

The researchers said that while this kind of survey may not be representative of all doctors, it showed doctors and medical trainees needed to be fully informed about the pros and cons of AI, particularly given the risks of inaccuracy, bias, hallucination and compromised patient privacy.

“While these chatbots are increasingly the target of regulatory efforts, it remains unclear how the legislation will intersect in a practical way with these tools in clinical practice,” they said.

“(These tools) may also risk harm and undermine patient privacy, since it is not clear how the internet companies behind generative AI use the information they gather.”

In Australia, the Chair of the Royal Australian College of General Practitioners (RACGP) Practice and Technology Management Expert Committee, Dr Rob Hosking, said he was aware of GPs using AI to help with administration, but warned against going any further at this stage.

Dr Hosking said GPs should not be using these tools for clinical purposes until they had been validated against the quality guidelines doctors currently apply manually.

“If we get further down the track and the tools are validated against specific guidelines to provide clinical decision support, then that’s a different thing altogether.

“That would need to be validated with the TGA (Therapeutic Goods Administration) before we can use them that way.”

Dr Hosking said the way the technology worked currently was too broad to use for accurate clinical advice and posed too great a risk.

Read the UK-based study.