Applies to: DES employees, volunteers, and board and commission members.
Information contact: Enterprise Technology Services
Click HERE to submit a ticket to DES IT Support
Background
The use of generative AI (e.g., Copilot, ChatGPT, Grammarly) is rapidly expanding and has the potential to improve business efficiency. Follow the guidelines below when using generative AI systems and internet tools at DES to set yourself up for success.
Definitions
Generative artificial intelligence (AI) is a technology that can create content, including text, images, audio, or video, when prompted by a user. Generative AI systems learn patterns and relationships from massive amounts of data, which enables them to generate new content that may be similar, but not identical, to the underlying training data. The systems generally require a user to submit prompts that guide the generation of new content.
Why do we need guidelines around using AI systems in the workplace?
- Generative AI technologies may reproduce biases or introduce new inaccuracies or “hallucinations” into the content they create that could harm the individuals or agencies (including DES) we serve.
- It’s important that we review content generated by AI for fairness, accuracy, and accessibility, especially when the content will be used outside the agency.
- Using a generative AI system may result in creating a public record under Washington state's Public Records Act.
- We need to ensure that we are preserving and protecting these public records appropriately. Contact the Agency Records Officer for assistance.
- State law already restricts the sharing of confidential information with unauthorized third parties. For state employees, RCW 42.52.050 (the state’s ethics law) specifically states: “No state officer or state employee may disclose confidential information to any person not entitled or authorized to receive the information.”
- We need to ensure that we are not inputting sensitive or confidential information into AI tools.
What should I do when using Generative AI systems?
Do verify that the content does not contain inaccurate or outdated information.
Do review the content for potentially harmful or offensive material.
Do read the entire output independently and review any summaries for biases.
Do rewrite documents in plain language for better accessibility and understandability if necessary.
Do disclose how material was reviewed or edited and by whom.
- Sample disclosure line: This memo was summarized by ChatGPT using the following prompt: “Summarize the following memo: (memo content).” The summary was edited by {Insert Names}.
Do ensure no copyrighted material is included in content generated by AI systems, which could inadvertently infringe upon existing copyrights.
What not to do when using Generative AI systems:
Don’t use personal email addresses to log in to AI systems or other internet-based tools for DES business use.
Don’t access or use AI systems or internet-based tools until you have received authorization from ETS.
Don’t include sensitive or confidential information in the prompt.
Don’t integrate or incorporate any non-public information into AI systems. Doing so could lead to unauthorized disclosures and legal liability.
I want to use an AI tool myself or request an AI tool for my team. What do I do?
Step 1:
- Before you use AI systems or other internet-based tools and software, you must contact ETS Customer Support.
Step 2:
- Evaluate what you will be using AI for – is it a risky behavior? (See the AI Risk Matrix below.)
- If so, take extra precautions to ensure you are following the guidance here.
- If not, still follow the guidance above to make sure your use remains responsible and appropriate.
- If you are requesting an AI tool for your program:
- Develop AI-use guidance for your team, modeled on the guidance above, especially if you anticipate your staff will be using the tool for high-risk purposes. (See the AI Risk Matrix below.)
- Identify high-risk potential uses in your program and ensure staff using the tool understand and follow your team’s AI guidelines.
AI Risk Matrix
When using AI for the following purposes, consider the level of risk. Use the risk matrix below to determine whether it’s a high-risk use case. If it is high risk, take extra care with the data you input into and receive from the AI tool, and submit a ticket to ETS Customer Support.
Examples of risky behaviors:
- Making decisions that impact individuals’ safety, health, or fundamental rights.
- Benefits determinations, security plans, personnel investigations, contract language, contract solicitations, facility policies, instructions, or equipment guidelines.
- Inputting personal data (SSNs, tax ID numbers, birthdays, addresses, first and last names).
- This information will always have a high impact if it is misused or released.
- External-facing documentation. Check for biases, incorrect or outdated information and accessibility.
- Important: Please check in with the Communications team before posting any public content.
- Legal documents (contracts, building leases, property sales)
- Data Leaks: inputting or telling AI to access sensitive information (personnel, HR related, physical or IT security)
To use the risk matrix:
Find the intersection between impact and likelihood on the chart.
- Levels of impact and likelihood are defined in the table below the chart.
- Low-risk scenarios will be green.
- Medium-risk scenarios will be orange.
- High-risk uses or results will be red.
- Important – If your outcome results are red, please ensure you contact ETS Customer Support.
- Our ETS Security team will help with ensuring your program processes for using the AI systems keep your program and agency data safe.
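As an illustration only, the lookup described above can be sketched in a few lines of Python. The score and the green/orange/red thresholds below are assumptions for demonstration (a common impact-times-likelihood convention), not the official chart's boundaries; confirm the actual cell colors with ETS before relying on them.

```python
# Illustrative sketch of the risk-matrix lookup.
# ASSUMPTION: risk is scored as impact * likelihood, with example
# thresholds; the official chart's color boundaries may differ.

def risk_level(impact: int, likelihood: int) -> str:
    """Return the risk color for an impact/likelihood pair (each 1-5)."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must each be 1-5")
    score = impact * likelihood  # intersection of the two axes
    if score <= 6:
        return "green"   # low risk
    if score <= 12:
        return "orange"  # medium risk
    return "red"         # high risk: contact ETS Customer Support

print(risk_level(2, 2))  # green
print(risk_level(3, 3))  # orange
print(risk_level(5, 4))  # red
```

A red result under this sketch corresponds to the cases where the guidance above asks you to contact ETS Customer Support.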
[5×5 risk matrix chart: Likelihood (1–5) across the top, Impact (1–5) down the side; each cell is colored green (low), orange (medium), or red (high risk).]
| Potential Impact | Likelihood of Incident |
| --- | --- |
| 1 – Negligible. No foreseeable direct or indirect impact. | 1 – Remote or improbable. Very low chance of occurring. |
| 2 – Low. Any impact is very unlikely to affect the health, safety, or fundamental rights of individuals OR the financial, reputational, or legal standing of the agency. | 2 – Unlikely. Low chance of occurring. |
| 3 – Moderate. Some impact to individuals that may include indirect impact to health, safety, or fundamental rights OR the financial, reputational, or legal standing of the agency. | 3 – Possible. Moderate chance of occurring. |
| 4 – Significant. Major effect causing substantial harm or disruption to health, safety, or fundamental rights OR the financial, reputational, or legal standing of the agency. May include direct impacts or indirect, systemic impacts. | 4 – Likely. High chance of occurring. |
| 5 – Severe or catastrophic. Extreme impact resulting in serious harm, injury, or violation of fundamental rights OR the financial, reputational, or legal standing of the agency. | 5 – Probable. Very high chance of occurring. |