Use of Generative AI and Machine Learning Applications
Generative AI or Machine Learning systems must support Reid Health's goals and protect the Safety, Confidentiality, Integrity, and Availability of all Data and Information Systems associated with the Generative AI or Machine Learning system.
Reid Health is committed to promoting the secure, legal, and ethical use of its Data and Information Systems, including in relation to Generative AI and Machine Learning systems, which are commonly used to process large quantities of data and either make decisions or generate communications based on that data.
Reid Health has developed guidelines and key considerations for workplace use of artificial intelligence models such as ChatGPT and other public domain AI tools. By adhering to these guidelines, we can harness the benefits and opportunities presented by AI, while upholding our commitments to high quality patient care, ethical practices, patient privacy and data security.
Reid Health and its workforce members adopt the following policy and procedure.
Availability - Ensuring that Data and Information Systems are ready for use when they are needed; often expressed as the percentage of time that a system can be used for productive work.
Confidentiality - The protection from unauthorized disclosure of any patient, corporate, provider and personnel information that is deemed sensitive. This information will be revealed only on a business need or need-to-know basis as authorized through approved procedures.
Data - Any information in any medium or form which has value to the company and which must be protected.
Generative AI - An artificial intelligence (AI) system capable of generating text, images, or other media in response to prompts supplied by a system user or information system. These systems use generative models, such as large language models, to statistically sample new data based on the training data set that was used to create them. Common examples of Generative AI include ChatGPT, Google Bard, Stable Diffusion, and DALL-E.
Information Owner - An individual who has responsibility for controlling the production, development, change, maintenance, access, use, resiliency, and security of the Data or Information System component.
Information System - The hardware, operating system, application, database, and infrastructure, including cloud-based services, that transmits, processes, or stores data. This includes mobile devices.
Integrity - Obtaining and maintaining patient, corporate, provider and personnel information accurately, timely, and in conformance with established policies, standards and procedures as applicable, while protecting the information from unauthorized alterations or deletions.
Large Language Model - A language model consisting of a neural network with many parameters, trained on large quantities of unlabeled text using self-supervised learning.
Least Privilege - The security principle which states that access to a resource shall be limited to the privileges needed to complete assigned tasks.
Machine Learning - A branch of artificial intelligence that teaches computers to learn from data and improve with experience, without being explicitly programmed.
Natural Language Processing - An interdisciplinary subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data.
Safety - A measurement of an information system's ability to avoid unwanted harm, loss, injury, or damage.
Workforce Member - Any of the following:
- Full- or part-time employee, temporary employee, volunteer, intern, student, business associate, contractor, consultant, vendor, or any other third party that provides services on behalf of Reid Health.
- All members of the Board of Directors, officers, and managers of Reid Health and its business units.
- All medical staff and allied health professionals of Reid Health and all Reid Health agents, including independent contractors providing health care services, physicians, and physician practices (including third party billing and practice management companies).
Reid Health Guidance for the use of Artificial Intelligence (AI):
- Accuracy and Reliability - AI tools can be wrong. They rely on sources with varying degrees of quality and reliability, and their responses are generated based on patterns and probabilities rather than personal knowledge or critical thinking. It is crucial to verify the accuracy and reliability of the information provided by these tools before making decisions, taking action, or sharing or publishing information.
- Patient Privacy and Data Security - When using any public domain AI tool, it is essential to protect patient privacy and data security. Never enter any personally identifiable information (PII) or protected health information (PHI) into these platforms. Immediately report any concerns about the handling of sensitive information, or a suspected security breach, by contacting AskIT.
- Ethical Use - Use all AI tools responsibly and ethically. Biases in AI models' training data can result in biased outputs. Avoid generating or disseminating content that is discriminatory or offensive, or that violates patient confidentiality or privacy regulations.
- Recognizing Limitations - AI tools may struggle with understanding nuanced or complex information, or medical scenarios. They are not a substitute for professional expertise or clinical judgment. Do not use public domain AI tools to support decisions related to direct patient care.
- Continuous Learning and Improvement - As a health care organization, we encourage employees to engage in continuous learning and improvement. AI can serve as a general learning tool, but it does not replace professional development and formal training related to your role and workplace responsibilities.
- Reporting - Promptly report any potential misuse of AI to our IT support team by contacting AskIT.
Generative AI and Machine Learning systems carry with them risk to the cybersecurity of Data and Information Systems:
- Safety - Because many artificial intelligence systems are optimized for scale and output rather than safety, such systems may reach an unsafe state due to inaccurate information, incomplete information, or misuse.
- Confidentiality - Information entered into Generative AI and Machine Learning systems may enter the public domain. This can release non-public information and breach regulatory requirements, customer or vendor contracts, or compromise trade secrets.
- Integrity - Generative AI and Machine Learning systems rely upon algorithms to generate content and make decisions. There is a risk that such systems may generate inaccurate or unreliable information.
- Bias and Discrimination - Because their outputs are shaped by learning algorithms and training data, Generative AI and Machine Learning systems have the potential to reproduce or amplify biases present in that data, producing discriminatory outputs or decisions.
- Generative AI and Machine Learning systems must not be used where harm to a patient could result from a decision of such systems unless all decisions are actively confirmed by a qualified Workforce Member.
- An information security risk assessment as defined in the Reid Third Party Risk Management Policy must be completed for all use of Generative AI and Machine Learning systems before the system integrates with any other Reid Information Systems, Data or business processes.
- The Application Owner for any Generative AI and Machine Learning system is accountable for the decisions made by such systems and their outcomes.
Sensitive data may not be stored, transmitted, or processed by a Generative AI or Machine Learning system unless at least one of the following is true:
a. The system as installed is fully contained within the Reid Health network environment.
b. The operator of the system has an active contract with Reid Health that includes cybersecurity and nondisclosure requirements (including a Business Associate Agreement for integration with PHI data or health systems).
Generative AI and Machine Learning systems may be used to generate content and code based on the sensitivity level of the Information Systems for which the content or code is created:
a. Public - Content and presentation-level code (not governing user interactions) may be used freely; application logic code must be reviewed by a qualified Workforce Member before promotion to production. Human review of all generated content is recommended at least weekly.
b. Internal Use Only - Content must be reviewed daily by a human. Code must be reviewed by a qualified Workforce Member before promotion to production.
c. Confidential - Content and code must be reviewed by a qualified Workforce Member before promotion to production.
Generative AI may be used to process a variety of data types, including medical images, clinical notes, and genetic data. Generative AI may be used for a variety of purposes in healthcare, including:
a. Creating new medical images, such as X-rays or MRIs.
b. Generating personalized treatment plans based on a patient's medical history and other factors.
c. Developing new drugs and therapies.
d. Conducting medical research.
e. Drafting replies to patient messages.
To ensure patient privacy and confidentiality, the following safeguards will be in place:
a. All data used by Generative AI will be de-identified or pseudonymized.
b. Only authorized personnel will have access to the data.
c. All data will be stored securely.
d. Patients will have the right to access and control their data. Patients should understand the potential risks and benefits of Generative AI before giving their consent.
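The de-identification and pseudonymization safeguard above can be illustrated with a minimal sketch. This is not a prescribed Reid Health implementation: the field names, the identifier list (an illustrative subset of the HIPAA Safe Harbor identifiers), and the key handling are all hypothetical; a real system would manage the key in a secret store and follow a vetted de-identification standard.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this would come from a managed
# secret store, never from source code.
PSEUDONYM_KEY = b"example-secret-key"

# Illustrative subset of direct identifiers to pseudonymize; the full
# HIPAA Safe Harbor list is considerably longer.
DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "phone", "email"}

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed, irreversible token so records
    can still be linked to one another without exposing the original value."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def deidentify_record(record: dict) -> dict:
    """Return a copy of the record with all direct identifiers pseudonymized."""
    return {
        k: pseudonymize(v) if k in DIRECT_IDENTIFIERS else v
        for k, v in record.items()
    }

record = {"mrn": "12345", "name": "Jane Doe", "diagnosis": "hypertension"}
clean = deidentify_record(record)
# Clinical fields are preserved; "mrn" and "name" become opaque tokens.
```

Because the token is keyed and deterministic, the same patient maps to the same token across records, which supports linkage for research while keeping identifiers out of the AI system.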
All use of Generative AI and Machine Learning systems must adhere to all applicable laws, regulations, and company policies, including and especially copyright laws.
Content produced by use of a Generative AI system must be labeled or footnoted as containing information generated by an AI system.
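One simple way to satisfy the labeling requirement above is to append a standard disclosure footnote to AI-assisted content before it is distributed. The wording and function below are illustrative only, not mandated language:

```python
# Hypothetical standard disclosure text; actual wording would be set by policy.
AI_DISCLOSURE = (
    "Note: Portions of this content were generated with the assistance "
    "of an AI system."
)

def label_ai_content(text: str) -> str:
    """Append the AI-disclosure footnote unless it is already present,
    so repeated labeling never stacks duplicate notices."""
    if AI_DISCLOSURE in text:
        return text
    return text.rstrip() + "\n\n" + AI_DISCLOSURE
```

Making the function idempotent means it can run at every step of a content pipeline without producing duplicate footnotes.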
The principle of least privilege must be used in any interface with Reid Data and/or Information Systems to restrict Generative AI and Machine Learning systems to only the data and functions necessary to perform the desired operations.
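As an illustration of least privilege applied to an AI integration, an access gateway might grant a system only those scopes that were approved for it during its risk assessment. The system names and scope strings here are hypothetical:

```python
# Hypothetical per-system scope allowlists; in practice these would be
# established by the information security risk assessment for each integration.
APPROVED_SCOPES = {
    "discharge-summary-bot": {"read:deidentified_notes"},
    "scheduling-assistant": {"read:schedule", "write:schedule_draft"},
}

def authorize(system: str, requested: set) -> set:
    """Grant only the intersection of requested and approved scopes, so an
    AI system never receives more access than it was vetted for. Unknown
    systems receive no access at all."""
    return requested & APPROVED_SCOPES.get(system, set())
```

Denying by default (an unknown system gets an empty scope set) keeps any unreviewed integration from touching Reid Data until it has been through the approval process.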
Generative AI and Machine Learning systems used at Reid Health must exhibit transparency and explainability. Workforce Members interacting with the system must be able to understand how AI decisions are made and why specific outcomes are generated.
All Generative AI systems must undergo a review at least annually to ensure that they are functioning as intended and are not causing unintended consequences.
Cyber Security Policy
Computer Use Policy
Third Party Risk Management Policy
VII. Approval Process
It is the responsibility of the Chief Information Officer to facilitate compliance with these procedures.
Any exceptions to this policy must be approved by the Chief Information Officer; if Protected Health Information is involved, the Director of Compliance and Privacy Officer must also approve the exception. All policy exceptions shall require a Risk Acceptance, which is reviewed annually.