Publicly available generative AI refers to AI tools that are accessible to the general public and not procured through PCC-approved channels. These tools have not been vetted against PCC privacy, security, or compliance requirements and cannot be assumed to meet them.

This definition does not include PCC-licensed AI instances that have undergone procurement and risk assessment processes, ensuring compliance with PCC policies related to privacy, security, and data protection.

This guidance specifically addresses publicly available generative AI due to its potential risks. When third-party or internal college information is entered into such tools (e.g., ChatGPT), it may become part of the AI model and could be shared with others who ask related questions, increasing the risk of data leakage.

The ethical and responsible use of publicly available generative AI must align with PCC’s policies, mission, and goals. The following guidelines emphasize a human-centered approach, ensuring AI is used in ways that benefit PCC and the broader community. All publicly available generative AI tools and use cases must be assessed to ensure they meet the standards of trustworthy AI.

  • Never enter personally identifiable or confidential information into publicly available generative AI tools.
  • Seek supervisory and security review and obtain written approval before entering any code into or using code generated by a publicly available generative AI tool.
  • Review, revise, test, and independently fact-check any output produced by publicly available generative AI to ensure it meets PCC’s standards for quality and accuracy. AI tools are not always reliable.
  • Clearly identify when content has been substantially drafted using publicly available generative AI.
  • Check and configure privacy and security settings of the publicly available generative AI tool before use.
  • For high-risk use cases, disable chat history and opt out of providing conversation history as data for training AI models before use.
  • Understand that using publicly available generative AI carries risks and take proactive steps to mitigate them whenever possible.
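The first guideline above (never enter personally identifiable or confidential information) can be reinforced with a lightweight technical control before text ever reaches a public tool. The sketch below is a hypothetical illustration, not a PCC-approved tool: it screens text for a few common PII patterns (email addresses, US phone numbers, SSN-like strings) and replaces them with labeled placeholders. The pattern names and regular expressions are example assumptions only; real screening would need far broader coverage (names, student IDs, health records) plus human review.

```python
import re

# Example PII patterns (illustrative only -- not an approved or complete list).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Screen a draft prompt before pasting it into a public generative AI tool.
print(redact("Contact jdoe@pcc.edu or 503-555-1234 about the SSN 123-45-6789."))
```

A screen like this catches only mechanical patterns; it does not detect confidential context (e.g., an unreleased policy draft), so it supplements rather than replaces the guidelines above.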

Training & Awareness

The Essential Guide to AI for Educators course offers approximately two hours of training on effectively using ChatGPT. Key topics include prompt engineering, ethical considerations, and practical applications of ChatGPT in education. Participants will receive a certificate of completion upon finishing the course.

Procurement

Before purchasing or acquiring any AI-related tools or software—especially those involving College resources or data—please consult the Information Technology division. This includes free tools. The IT team will evaluate the product, review contract terms for potential risks, and ensure it aligns with College policies. We may also guide you to approved options, helping to avoid unnecessary or duplicate costs.

Security and Risk Assessment for Publicly Available Generative AI

  • Conduct security assessments of all publicly available generative AI tools before use to ensure system safety and reliability and to understand how data is used, stored, and destroyed. Use the NIST AI Risk Management Framework (AI RMF) to assess and manage the risks AI poses to individuals, organizations, and society.
  • Regularly re-assess the tool or AI functionality, at least annually or whenever there is a major release or functional change to the AI-enabled product.
  • Evaluate the legal risks posed by the terms and conditions governing the license or use of publicly available generative AI tools, and obtain approval before accepting them. These terms may be legally binding and could impose obligations on employees or the college.
  • Document the use of publicly available generative AI at the college or department level to maintain accountability and ensure AI use aligns with the public good, without involving personally identifiable information (PII) or other sensitive data.
  • Track and share successful AI use cases to promote responsible AI adoption within the college and among other institutions that could benefit from its implementation.

Use of Publicly Available AI-Generated Visual, Audio & Video Content

  • Seek approval from the college’s Director of Marketing and Communications/PIO before using or publishing visual, audio, or video content generated by publicly available generative AI.
  • Review and vet AI-generated content to identify and address any potential bias, offensive material, or inaccuracies before publication.
  • Ensure compliance with all applicable state and federal laws and standards (e.g., privacy, data protection, copyright, intellectual property) before publishing any content produced by publicly available generative AI.

Publicly Available Generative AI & Public Records

Information provided to a publicly available generative AI tool is considered “released to the public” and may be subject to public records requests under the Public Records Act (PRA). 

Releasing information that does not have a public information classification may violate privacy or data protection requirements and laws. 

Risks & Limitations of Publicly Available Generative AI

While AI offers tremendous potential benefits to higher education and society at large, all uses of generative AI come with some risk. Users must follow the AI Framework when evaluating the risks of using publicly available generative AI on college equipment and/or for college business.

As with all content produced by the college, content created for publication using publicly available generative AI requires thorough review before use. Special care should be taken when the output has the potential to impact students, faculty, staff, or the community’s exercise of rights, opportunities, or access to critical resources or services administered by or accessed through the college. This protection applies regardless of the changing role of automated systems in higher education.

When using publicly available generative AI, keep in mind that:

  • Publicly available generative AI should be evaluated for accuracy.
  • Content produced by publicly available generative AI tools may be inaccurate or unverifiable. Examples include:
      • AI-generated hallucinations (made-up content)
      • Incorrect context (data pulled from the internet may not be representative of the college – e.g., policies from private industry vs. public institutions, federal vs. state guidelines)
      • Citing non-existent sources
  • Publicly available generative AI models and algorithms are often proprietary, meaning end-users may not have insight into how they were created or function.
  • It may not be possible to determine how the model was trained and evaluated for bias.
  • Results may be based on datasets that contain errors and may be historically biased across race, sex, gender identity, ability, and other factors.
  • Publicly available generative AI tools may not comply with state and federal laws and requirements designed to ensure the confidentiality of sensitive information.
  • A security risk assessment by the college is required before uploading any information.
  • Use of publicly available generative AI may require employees to accept legal terms and conditions governing the license and/or use of the tool, which may be enforceable against the employee or the college.
  • Publicly available generative AI may create content that infringes on others’ intellectual property (e.g., patents, copyrights, trademarks).
  • Entering information into a publicly available generative AI tool is equivalent to releasing it publicly.
  • Releasing information that is not public may violate privacy or data protection requirements and laws.
  • Using generative AI to generate software code could expose existing vulnerabilities and create new ones if systems are not kept current with patches and software updates.
  • AI systems and related services rely on computing technology and networks that must be secured against unauthorized access and manipulation to ensure the integrity of systems and data.
  • Uploading or sharing any personal, proprietary, or restricted data into publicly available generative AI tools is strictly prohibited. This includes proprietary code, personal information, and security-sensitive information.
  • This guidance extends to all Personally Identifiable Information (PII) associated with students, faculty, staff, or partners and includes educational, financial, and health records, trade secrets, and any other sensitive information entrusted to the college.

By adhering to these guidelines, the college can responsibly navigate the evolving landscape of generative AI while protecting its community and data integrity.