The AI Guidance document for the University, “Principles for Use of Generative Artificial Intelligence Tools and Services,” is under review and will be released soon.

PC Shea, the University’s Privacy Officer, has facilitated and coordinated this effort at UD and will soon publish this guidance to help the University community understand the substantial risks generative AI poses. The document presents a University-wide strategy for securely adopting new technology and prompts users to consider how they use University data.

The Ubiquity of AI

In the “Principles of Use,” generative AI is defined as: “any machine-based tool designed to consider user questions, prompts, and other inputs (e.g., text, images, videos) to generate a human-like output (e.g., a response to a question, a written document, software code, or a product design).” This includes tools like ChatGPT and Microsoft Copilot, but the definition also covers tools developed at UD and services your team already uses that begin adding generative features. The “Principles” apply to all Generative AI use by faculty, staff, students, affiliates, and other University stakeholders, so the document is worth reading and sharing. Purchased tools and services go through the Technology Request process and, later, the contract renewal process, where the security and privacy practices of our partners are scrutinized. If you’re a relationship manager for a third party that has recently added an AI component to its tools or workflows, please email it-pmo@udel.edu so we can include it in our documentation.

Current Policy

The University of Delaware has a number of policies that already cover the use of Generative AI. The most recognizable are the Information Classification Policy, the Data Governance Policy, and the Information Security Policy. That list is by no means exhaustive; Generative AI can also exist outside the IT realm and fall under Academic, Student Life, or Research policies. These tools are novel in some ways, but they often operate like older tools: put data in, get information out. Generative AI tools are often proprietary black boxes, so we can’t be certain how the tools we purchase make their decisions. Even if we develop our own AI tools, the technology is so new that interpretability and explainability aren’t entirely reliable yet. AI models can also “hallucinate,” confidently presenting false or misleading information as fact. The “Principles for Use” are intentionally broad; they highlight best practices and help reduce the risk of adopting specific new technologies.

Businesses and universities across the world are looking to address how AI is used in their environments, mitigate risk, and keep their business and customers’ data safe. Higher ed is unique because we have a mandate to foster the free exchange of ideas while protecting the privacy and security of sensitive information. To address these sometimes conflicting goals, this document provides operational guidance for all of us.

The “Principles for Use” make a clear distinction between publicly available Generative AI and UD-offered Generative AI. Any AI tools offered by UD have security configurations and contractual protections built in, while many publicly available Generative AI tools have ever-changing security configurations, data protections, and click-through privacy agreements. PC provides a list of sensitive data types and describes the steps we must take to secure them. The output review requirements in the “Principles for Use” keep us from accidentally sharing erroneous information, facilitating biased outcomes in our work, or publishing malicious or illegal content. The document also reaffirms our commitment to academic integrity and points to guidance from the Center for Teaching & Assessment of Learning, which lays out the AI approaches available to educators in the classroom: use prohibited, use only with prior permission, use only with acknowledgement, and use freely permitted with no acknowledgement. These approaches are part of a larger conversation about the ethics, shortcomings, and potential harms of Generative AI in higher education.

Reporting 

If you witness AI-powered tools at UD behaving erratically or experiencing security failures, or if you suspect malicious activity or find deepfakes targeting or impersonating people in the UD community, please report it via the Report a Security Event form.

Questions?

Please reach out to PC Shea if you have questions about the new AI Guidance or the University’s privacy policies. Email IT-Information Security if you have questions about assessing the risk of a Generative AI tool, configuring AI tools, setting up secure procedures, or general security matters.

Thank you for helping us Secure UD!