What does privacy mean to you? 

International privacy laws like the General Data Protection Regulation in the European Union, national laws such as the Personal Information Protection Law in China, and state laws like the Delaware Personal Data Privacy Act create legal obligations for people and businesses who collect, process, sell, or distribute personal information in their respective jurisdictions. They also set parameters for what constitutes privacy for everyday citizens. Many of us are blissfully unaware of how much of our personal data changes hands, and even the most conscientious of us may live in a geographic area where perceived privacy rights do not match what local legislation actually protects. The “right to privacy” debate, coinciding with the hype surrounding emerging artificial intelligence technologies, has regulators and companies rushing to create responsible frameworks for ensuring the security and privacy of sensitive data within the context of AI. Security is everyone’s responsibility, and with AI being integrated into many of our current workflows, we may have to adjust our tools and activities to protect the privacy of UD data as well as our own personal data.


What does AI mean to you?

Artificial intelligence (AI), specifically generative AI, has been making headlines for a bit over a year now. The American National Standard Dictionary of Information Technology (ANSDIT) defines artificial intelligence as “(1) A branch of computer science devoted to developing data processing systems that performs functions normally associated with human intelligence, such as reasoning, learning, and self-improvement.” This branch of computer science has an astounding number of applications and involves a range of technologies spanning machine learning, natural language processing, generative AI, and more. Generative AI is a subset of the AI field describing tools that take a request or input from a user and generate new content in response. These responses can take the form of text, images, or audio, and they are growing more complex over time. For a deeper dive into the intricacies of generative AI, refer to this EDUCAUSE article.

Protect your personal privacy.

Most generative AI tools allow you to manage your personal privacy options. We’ve included links below to the privacy settings and policies for three of the biggest generative AI tools: OpenAI ChatGPT, Google Bard, and Microsoft Copilot.

OpenAI ChatGPT:

https://chat.openai.com/#settings
https://privacy.openai.com/policies

Google Bard:

https://myactivity.google.com/product/bard

Microsoft Copilot:

https://account.microsoft.com/privacy?lang=en-US

These settings may not work as intended with UD-managed accounts but should allow users to adjust their personal generative AI accounts. Modifiable privacy settings include the ability to delete prompts that have been associated with a user (similar to deleting personal browser history) and the option to refuse storage of future prompts. Users can request that future inputs not be used to train a generative model. Depending on geographic location, users may also be able to request that their personal data be removed from the model itself. In most instances, any data you submit to the tool or service remains available to the model and runs the risk of being exposed; Microsoft and OpenAI have both reported breaches this past year.

What do you agree to and what are you comfortable with sharing personally?

AI is data-hungry, and any service touting an AI application will come with a number of agreements and advertised policies pertinent to your privacy and the privacy of the data you’re working with. Many of these contractual agreements haven’t been fully finalized yet, are considered additional terms, and can come with a high “cost.” The Adobe Generative AI Additional Terms state (as of the publishing of this article): “If you do not currently have a paid subscription to Services and Software and submit a text-based Input (including any design settings, such as style) to a generative AI feature, you grant us a non-exclusive, perpetual, irrevocable, worldwide, royalty-free license to use, reproduce, distribute, modify, sublicense, create derivative works based on, publicly display, publicly perform, or translate both the submitted Input and any corresponding Outputs for any purpose.” 

You can easily give away ownership of your work and data. The policies that enumerate rights and responsibilities can be dry, but you may want to consider establishing your own personal privacy baseline to determine what data and rights you’re not willing to surrender. 

OpenAI Privacy Policy:

https://openai.com/policies/privacy-policy

Google Privacy Policy:

https://policies.google.com/privacy

Microsoft Privacy Statement:

https://privacy.microsoft.com/en-US/privacystatement

If it feels impossible to always read the fine print, you’re not alone. According to an NPR Morning Edition report, Jonathan Obar of York University and Anne Oeldorf-Hirsch of the University of Connecticut conducted a study that required volunteers to sign up for a fictitious social networking site. Buried in the site’s Terms of Service (TOS) was a fairytale ‘Firstborn Payment’ clause that 98% of participants failed to notice. The researchers also found that an individual would need 40 minutes a day, every day, to read all of the TOS and privacy policies of the services we use. Luckily, we have a contract review process for vendors that interact with UD data.

What does the University agree to share with vendors and third parties?

The Procurement Contracting team partners with the Office of General Counsel, Risk Management, IT-PMO, IT-Security, and other stakeholders to protect the University in contract negotiations. Vendors and affiliates may be looking to develop their own AI models or to reshape their products by linking them to generative AI tool sets.

If you manage a relationship with a vendor that asks you or your unit to “Accept new Terms of Service” to continue using a product, if you’re working with a third party that handles sensitive University data and is attempting to deploy new AI functionality without requesting consent to process UD data, or if you just have questions about the process, please reach out to procurement@udel.edu. Our Third Party Risk Management team is currently investigating a number of AI technologies, but limited data protections are hindering adoption.

Many of our policies were written before generative AI burst onto the scene; however, our Information Security Policy, Information Classification Policy, and Data Governance Policy are all applicable and must be taken into account when using new technology. Although we may want to increase the efficiency of our workflows, the University often owns the data we work with, so we must avoid using sensitive UD data in unsecured generative AI, basing official decisions on generative AI outputs, or publicly releasing unconfirmed responses.
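As a concrete illustration of keeping sensitive data out of prompts, the sketch below uses simple regular expressions to redact obvious identifiers (email addresses, phone numbers, Social Security numbers) before text is sent to any external generative AI service. The patterns, function name, and sample prompt are all illustrative assumptions, not a UD-provided tool, and pattern matching alone will not catch every form of sensitive data.

```python
import re

# Illustrative patterns only -- real PII detection needs far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this: contact Jane at jane.doe@udel.edu or 302-555-1234."
print(redact(prompt))
# Summarize this: contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

A pre-submission step like this reduces, but does not eliminate, the risk of exposing personal data; names, student IDs, and free-text details still require human review.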

Tools to stay on the cutting edge of generative AI

If you’re involved in the process of creating or training AI models, a few established practices can help guide your privacy efforts: minimizing the personal data you collect in the first place, de-identifying records before training, and adding calibrated statistical noise to shared results (differential privacy).
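One widely used practice is differential privacy: adding calibrated noise so that aggregate statistics can be shared without revealing any single individual’s record. The sketch below is a minimal illustration of the Laplace mechanism for a simple count query; the epsilon value, the data, and the predicate are hypothetical, and production systems should use a vetted library rather than hand-rolled noise.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Noisy count: a count query has sensitivity 1, so noise scale = 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Hypothetical data: ages in a small survey.
ages = [19, 22, 34, 41, 55, 23, 30]
noisy = dp_count(ages, lambda a: a >= 30, epsilon=0.5)
print(f"Noisy count of respondents 30+: {noisy:.2f}")
```

Smaller epsilon values add more noise and give stronger privacy at the cost of accuracy; choosing epsilon is a policy decision as much as a technical one.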

Reach out to Information Security at secadmin@udel.edu, and we will work with you to help secure your environment and protect the privacy of the data within it!

With major challenges to generative AI coming from copyright lawsuits over the ‘fair use’ doctrine, the future of AI is uncertain. Current AI services are opening up new opportunities, but the technology’s business model may be forced to change into something currently unrecognizable. We must continue safeguarding the privacy and security of UD systems and data to fulfill our mission.