Artificial intelligence's impact on patient information during the perioperative period

Recent data suggest that ChatGPT has 200 million active users each week. In a public online survey, 78.4% of ChatGPT users reported that they would use it for self-diagnosis. This raises many unanswered ethical and legal questions, such as who is responsible should poor advice from AI lead to patient harm.

Generative artificial intelligence (AI) describes technology that can create new content, including text, images and audio, based on patterns and structures learnt from existing data. Large language models (LLMs) are a type of generative AI model trained on vast amounts of online data; they employ natural language processing and are designed to mimic human language and communication.

Since OpenAI released ChatGPT, powered by GPT-3.5, in November 2022, there has been a significant rise in interest in and development of LLM chatbot technology, which has become increasingly sophisticated. Other companies, such as Google, have since developed LLMs integrated into search engines and accessible via plug-ins.

ChatGPT and other AI chatbots have not been designed or licensed to provide medical information and advice. Although ChatGPT's usage policies acknowledge that providing medical and health advice without review by a qualified professional may significantly impair safety and wellbeing, the policies are not prohibitive. There is therefore increasing concern about unregulated 'off-licence' use by members of the public.