Garfield County adopts first policy governing artificial intelligence use by employees 

Garfield County commissioners on Monday unanimously approved the county’s first policy regulating how government employees use artificial intelligence (AI).

First drafted in August, the policy was added to the Board of County Commissioners’ consent agenda this week. It establishes standards for all county employees — as well as contractors, third-party vendors, consultants and volunteers — who use AI tools on behalf of the county. It applies to activities such as predictive analytics, automation tools, chatbots and generative AI programs including ChatGPT and Microsoft 365 Copilot.

“The ultimate goal is to benefit our community and help make our services better and improve our processes,” said Gary Noffsinger, Garfield County’s IT manager, during a work session with county commissioners on Oct. 15. “We are just at the beginning stages as a county, but we look forward to a phased approach to policy, to experimentation and to learning.” 

According to the county’s IT department, benefits of AI include quicker response times, more efficient service delivery, improved infrastructure management and advanced data analysis. Automating routine tasks may also save the county money while reducing administrative burdens for staff.

AI tools are already integrated into software regularly used by county employees — including Microsoft and Adobe applications — and the county attorney’s and assessor’s offices are experimenting with AI for legal research and data analysis, according to county staff. Other departments are expected to begin pilot projects in 2026.

But the county’s IT department also warned of AI risks including algorithmic bias, job displacement, inaccurate information, overreliance on automation and potential data privacy breaches. The newly adopted policy aims to ensure employees can optimize their work using AI while mitigating these risks.

“Department leaders have a responsibility to support right-sized (generative AI) uses that deliver the most significant public benefit,” the policy states. “This begins with ensuring that AI projects are problem-led, rather than technology-led, by centering on the specific needs and challenges faced by the department and the communities it serves.”

Key points 

Six key points in the policy outline general rules for AI use in county work: 

  • AI should only be used as a supportive tool.
  • Sensitive data must never be entered into public AI tools.
  • The use of AI must be disclosed.
  • AI-generated content must be reviewed before use.
  • All AI use must comply with laws and county guidelines.
  • Employees must consult with the IT Department and department heads before using new AI tools.

Employees may use AI tools to support services, optimize productivity and assist with decision-making, but not to “replace human judgement or bypass established procedures,” the policy states. 

The policy also emphasizes transparency. Employees using AI must inform their managers, and sensitive information cannot be entered into public AI tools unless permission is given by a department head. When AI is used for public-facing or sensitive work, its use must be disclosed, notice should be given to any impacted individuals and the tool should be recorded in the county’s AI inventory, the policy states. 

AI-generated content must be reviewed for fairness, accuracy and inclusivity and edited, fact-checked and validated before use. The policy notes that AI tools can reflect bias in training data and aren’t always accurate — for example, generative AI models such as ChatGPT may “hallucinate” or make up information. 

Employees are explicitly prohibited from creating deepfakes — fake images or recordings — or fictional survey results, and from relying on AI for legal or regulatory analysis. Even when AI tools are used, employees remain responsible for any content they use or share, the policy states. 

Data privacy and risk levels

The policy prohibits county employees from entering sensitive data — including medical records, legal files or protected resident information — into public AI systems. All AI use must comply with ethical guidelines such as privacy and public record laws. 

It also classifies AI use into low-, medium- and high-risk categories. Low-risk uses include drafting internal emails, creating meeting summaries or writing code. Medium- and high-risk uses include generating public content, interview questions or hiring materials, contributing to safety or regulatory documents and summarizing policy data.

County data classifications are established in the policy, from level one — public data including internal memos and public websites — to level three, confidential or sensitive data that includes protected health information, passwords and federal tax information. 

The policy defines which data levels can be entered into specific AI tools. For example, the policy states that for ChatGPT Business or Enterprise, level two data or below can be used, while protected health information can only be used in tools that have a Business Associate Agreement, such as Copilot Chat. 

Copilot Chat is already available to county employees as a “secure option for most Generative AI tasks,” the policy states. Use of AI tools not purchased or vetted by the county is strongly discouraged, according to the policy.

Content entered into or generated by AI may qualify as public record under the Colorado Open Records Act (CORA). The county therefore retains Copilot Chat content for 30 days and ChatGPT Enterprise content for 90 days. 

“While AI-generated content can appear authoritative and polished, it can be inaccurate, biased, or misleading. GenAI use can also heighten the risk of privacy breaches, unauthorized data sharing, and cybersecurity threats,” the policy states. “Furthermore, overreliance on GenAI for decisions that affect the public’s rights or safety can reduce transparency, weaken accountability, erode trust in government, and may be a violation of Colorado law.”

The IT Department will enforce the policy through regular audits and monitoring and violations may result in disciplinary action. The policy will be updated periodically to align with state, federal and industry standards, including the National Institute of Standards and Technology’s AI Risk Management Framework, the policy states. 

“It’s like every invention that’s ever come. There’s so much good that can come of it, but if evil, bad people control it for their means, then people are enslaved,” Commissioner Mike Samson said on Oct. 15. “That’s what really sparked my (desire) to be here and hear what (Gary’s) got to say.

“The potentials are just unbelievable, but if they’re used in the wrong ways, watch out,” he added. “Mankind will never be the same.” 