
Medical: Professional obligations & AI in healthcare.

By Bill Madden on September 9, 2024

Ahpra (the Australian Health Practitioner Regulation Agency) has published guidance explaining how the existing responsibilities in the National Boards’ codes of conduct apply when practitioners use AI in their practice.

Some AI tools used in healthcare are regulated by the Therapeutic Goods Administration (TGA). The TGA regulates therapeutic goods that meet the definition of a medical device, which can include software (including AI-enabled software) that has a therapeutic use.

Generative AI tools used in clinical practice, such as AI scribes, are usually intended for general purposes and do not have a therapeutic use or meet the definition of a medical device; they are therefore not regulated by the TGA.

The guidance identifies key principles as follows:

Accountability

Regardless of what technology is used in providing healthcare, the practitioner remains responsible for delivering safe, quality care and for ensuring their own practice meets the professional obligations set out in their Code of Conduct. Practitioners must apply human judgment to any AI output. TGA approval of a tool does not change a practitioner’s responsibility to apply human oversight and judgment when using AI, and all tools and software should be tested by the user or organisation to confirm they are fit for purpose before use in clinical practice. If using an AI scribing tool, the practitioner is responsible for checking the accuracy and relevance of records created with generative AI.

Understanding

Health practitioners using AI in their practice need to understand enough about an AI tool to use it safely and in a way that meets their professional obligations. At a minimum, the practitioner should review the product information for the tool, including how it was trained and tested and on which populations, its intended use, and its limitations and the clinical contexts in which it should not be used. Understanding the ‘intended use’ of an AI tool is particularly important, as this informs a practitioner’s consideration of when it is appropriate to use the content or imaging the AI generates, as well as the associated risks and limitations, including diagnostic accuracy, data privacy and ethical considerations. It is also important to understand how data is used to retrain the AI, where data is located and how it is stored.

Transparency

Health practitioners should inform patients and clients about their use of AI and consider any concerns raised. The level of information a practitioner needs to provide will depend on how and when AI is being used. For example, if AI is used within software to improve the accuracy of interpreting diagnostic images, the practitioner would not be expected to explain the technical detail of how the software works. However, if a practitioner uses an AI tool to record consultations, they would need to provide more information about how the AI works and how it may affect the patient, particularly its collection and use of their personal information (for example, if public generative AI software is used, personal information effectively enters the public domain).

Informed consent

Health practitioners need to involve patients in the decision to use AI tools that require input of their personal data, including where a patient’s data is required for care (for example, via a recommended diagnostic device). Make sure you obtain informed consent from your patient, and ideally note the patient’s response in the health record. An AI scribing tool that uses generative AI will generally require input of personal data and therefore require informed consent from your patient or client. Informed consent is particularly important for AI models that record private conversations (consultations), as there may be criminal implications if consent is not obtained before recording; the AI transcription software should include an explicit consent step before proceeding.

Ethical and legal issues

Other professional obligations in each Board’s Code of Conduct or equivalent that are relevant to the use of AI in practice include:

  • ensuring the confidentiality and privacy of your patient/client as required by privacy and health record legislation, by checking that data is collected, stored, used and disclosed in accordance with legal requirements and that your patient’s privacy is not inadvertently breached. Practitioners need to be aware of whether the patient data being used or recorded is also used to train the AI model for future patients, and whether identifiable patient data finds its way into that learning database
  • supporting the health and safety of Aboriginal and Torres Strait Islander people and all patients/clients from diverse backgrounds by understanding the inherent bias that can exist within the data and algorithms used in AI applications and only using them when appropriate
  • complying with any relevant legislation and/or regulatory requirements that relate to using AI in practice, including the requirements of the TGA and your state and/or territory government
  • being aware of the governance arrangements established by your employer, hospital or practice to oversee the implementation, use and monitoring of AI to ensure ongoing safety and performance, including your role and responsibilities, and
  • holding appropriate professional indemnity insurance arrangements for all aspects of your practice and consulting your provider if you’re unsure whether AI tools used in your practice are covered.


