
Use of Generative AI for Administrative Purposes at WVU

In the evolving landscape of technology, West Virginia University (WVU) recognizes the transformative potential of generative AI tools like ChatGPT, Bard, DALL-E, and Otter.AI; however, it is of utmost importance to use these tools in a manner that is beneficial, ethical, and aligned with the core values and regulatory frameworks of WVU. Therefore, the Office of the Provost, General Counsel, and Information Technology Services provides the following guidance on how these tools may be used for administrative purposes at WVU.

These guidelines apply to WVU faculty, staff, and students when engaging in administrative activities and complement existing policies related to the use of technology at WVU. They represent WVU’s commitment to responsibly integrating generative AI into our administrative functions. They are dynamic and will evolve alongside technological advancements, ensuring that WVU remains at the forefront of ethical and effective AI application. If you are unsure about something in these guidelines or have suggestions for improving them, please contact Information Security Services at infosec@mail.wvu.edu.

Core Principles

It is important to always keep these core principles in mind when using any technology at WVU, including generative AI. These core principles align AI use with the ethical standards and integrity values central to WVU and further support the Acceptable Use of Data and Technology Resources Policy, which identifies the acceptable and unacceptable uses of data and technology at WVU.

  • Understand capabilities and limitations. It is imperative to understand a generative AI tool’s capabilities and limitations before using it. Generative AI tools use machine learning models trained on massive pools of information to learn patterns from data and create novel content like text, images, audio, or video in response to a prompt. Unlike internet search engines, generative AI tools do not use algorithms to locate and curate existing sources; instead, they create new content by predicting what word, sound, or pixel would come next in a pattern. AI-generated content can be inaccurate, misleading, entirely fabricated, or may contain copyrighted material. Review your AI-generated content for inaccuracies before use.
  • Employ Trust and Transparency. Ensure clarity and openness when employing AI, particularly in areas affecting decision-making or policy development. Always ask yourself if a reasonable person would expect to know that you used generative AI to create the product and explain how you used AI.
  • Be accurate and inclusive. Ensure that your use of AI systems will not harm another individual or WVU. Prioritize outputs that are universally accessible and inclusive, and check all generated content for both inherent bias and accuracy before sharing any products.
  • Ensure data privacy and security. While there are many chances to experiment and innovate using these tools, at present WVU does not have an enterprise contract or agreement with any AI provider, meaning standardized WVU security and privacy provisions are not in place for this technology. Never put personally identifiable information (PII), confidential, or other sensitive data into a generative AI tool unless you have been explicitly approved to do so through the IT Purchase Request process. The WVU Information Privacy Policy states that WVU will never distribute or share the PII it has collected unless it has a contractual agreement that the data will be secured and destroyed when no longer required; generative AI tools cannot guarantee these security requirements. Use the IT Purchase Request process to ensure the tool you want to use is secure, including “free” software. Report any AI-related security or privacy events to Information Security Services via the WVU Incident Report Form.
  • Be adaptable. Embrace continuous learning about AI advancements and adapt as these guidelines are adjusted. Encourage open dialogue and suggestions for improvement. These guidelines will be revisited and updated regularly to stay current with AI developments and institutional needs.
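The “predict what comes next” idea behind generative AI can be illustrated with a toy bigram model. This is a deliberately simplified sketch, not how production generative AI systems work; real tools use large neural networks trained on vast corpora, but the core idea of generating new content from learned patterns (rather than retrieving existing documents) is the same.

```python
import random
from collections import defaultdict

# Toy illustration of next-token prediction: learn which word tends
# to follow which, then generate new text by repeatedly sampling a
# plausible next word. The tiny "corpus" below is invented for the
# example.
corpus = "the model predicts the next word the model generates new text".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

random.seed(0)
word = "the"
output = [word]
for _ in range(6):
    candidates = follows.get(word)
    if not candidates:          # no learned continuation; stop
        break
    word = random.choice(candidates)  # sample a plausible next word
    output.append(word)
print(" ".join(output))
```

Note that the generated sentence is new: it is assembled from learned word-to-word patterns, not looked up anywhere, which is also why such output can be fluent yet wrong.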
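As a practical complement to the privacy principle above, the sketch below shows one way to scrub obvious PII from text before it goes anywhere near an external tool. The regex patterns are illustrative assumptions only; simple patterns like these catch only the most obvious emails, phone numbers, and SSN-shaped values, and they are no substitute for vetting a tool through the IT Purchase Request process or for human review.

```python
import re

# Minimal sketch: redact the most obvious PII patterns from text
# before it is pasted into any external tool. These regexes are
# illustrative, not comprehensive -- real PII detection requires far
# more than pattern matching.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

note = "Email jane.doe@mail.wvu.edu or call 304-293-4444 about 123-45-6789."
print(redact(note))
```

Even after redaction, remember the underlying rule: if the data is confidential or sensitive, it does not go into an unapproved tool at all.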

Use of AI Tools

Integrating generative AI tools into your work can help you and your team be more efficient and effective, but always remember that AI should only be used as a tool to aid you and your team, not to replace human expertise or judgment.

The following section covers the potential uses of AI/ML. It includes examples of when it should and should not be used.
Administrative Use of AI/ML

Communication Enhancement

Permitted:
  • Refining communication messages and creating presentations.
  • Analyzing communication patterns for effectiveness.
  • Example: Drafting generic email campaigns and suggesting language improvements.

Prohibited:
  • Handling communications that include personally identifiable information (PII) due to privacy concerns.
  • Example: Personalizing emails with recipient-specific PII.

Analytical and Reporting Tools

Permitted:
  • Processing large data sets to extract insights and trends.
  • Enhancing data analysis efficiency and accuracy.
  • Example: Analyzing anonymized customer behavior patterns.

Prohibited:
  • Analyzing identifiable personal information, especially sensitive data.
  • Example: Processing data that could reveal individual customer identities.

Document Management

Permitted:
  • Assisting in the creation and organization of documents.
  • Example: Generating inclusive job descriptions.

Prohibited:
  • Situations where the authenticity and originality of the document are critical.
  • Example: Legal documents requiring nuanced human understanding.

Customer Service Automation

Permitted:
  • Implementing chatbots for routine customer inquiries.
  • Example: Retail website chatbots for instant responses.

Prohibited:
  • Handling complex customer service issues that require empathy and deep understanding.
  • Example: Resolving sensitive customer complaints.

Predictive Maintenance

Permitted:
  • Predicting equipment failures and scheduling maintenance.
  • Example: AI analyzing machine data to predict maintenance needs.

Prohibited:
  • Situations where incorrect predictions could lead to significant safety risks.
  • Example: Critical safety systems where human oversight is essential.

Personalized Marketing

Permitted:
  • Customizing marketing efforts based on customer data analysis.
  • Example: E-commerce platforms recommending products based on user history.

Prohibited:
  • Marketing strategies requiring a deep understanding of complex human behaviors and ethics.
  • Example: Personalized advertising that could infringe on privacy or ethics.

Budget Data Handling

Permitted:
  • None at present. Given the lack of privacy protections in current AI/ML tools, there is no approved pathway for handling sensitive budget data.

Prohibited:
  • Direct handling of sensitive financial data, including budget planning and allocation.
  • Example: Making decisions on budget allocations or financial planning.

Meeting Transcription

Permitted:
  • Transcribing meetings, lectures, and other spoken content for record-keeping and accessibility.
  • Example: Employing AI to provide real-time transcription of business meetings or academic lectures, making them accessible to a wider audience.

NOTE: You must inform everyone in the meeting at its start that you will be using AI for these activities so that they can consent to its use and the collection of their information. If someone does not agree to the use of the tool, do not use it.

Prohibited:
  • Transcribing meetings where the content is highly confidential or sensitive and the risk of data breaches or inaccuracies is significant.
  • Example: Using AI transcription in closed-door, high-level strategic meetings or in contexts involving sensitive personal information, where human discretion is paramount.

Audio and Video Creation

Permitted:
  • Creating personalized audio or video content based on user preferences and behavior.
  • Example: Streaming platforms generating custom playlists or video summaries for individual users.
  • Enhancing the accessibility of content through automatic dubbing and subtitling in multiple languages.
  • Example: Automatically translating and dubbing a lecture into several languages to reach a global audience.

Prohibited:
  • Generating deepfake audio or video that could be used to impersonate individuals or spread misinformation.
  • Example: Creating realistic video clips of public figures saying or doing things they never actually did.
  • Producing content without proper consideration for copyright, ethical implications, or the potential for harmful misuse.
  • Example: Using AI to generate music or videos that closely mimic copyrighted material, or creating content that could be harmful or misleading.

Coding Assistance/Generation

Permitted:
  • Increasing the efficiency of developers and reducing development time.
  • Example: Using AI to generate the boilerplate code for an API.

Prohibited:
  • Producing code without considering whether it truly meets the objective or is secure.
  • Example: Generating code to report student enrollment numbers that relies on obsolete data or repeats other developers’ mistakes.
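The security caution above can be made concrete. The sketch below contrasts a hypothetical piece of AI-generated boilerplate (which builds a SQL query by string formatting and is therefore open to SQL injection) with a human-reviewed version that uses a parameterized query. The table and column names are invented for the example.

```python
import sqlite3

# Hypothetical illustration: AI-generated code often "works" on the
# happy path but skips security review. Interpolating user input into
# SQL (as generated boilerplate sometimes does) allows SQL injection;
# the reviewed version passes the value as a bound parameter instead.

def count_enrollment_generated(conn, term):
    # As generated: string interpolation -- vulnerable to injection.
    return conn.execute(
        f"SELECT COUNT(*) FROM enrollment WHERE term = '{term}'"
    ).fetchone()[0]

def count_enrollment_reviewed(conn, term):
    # After human review: parameterized query, safe against injection.
    return conn.execute(
        "SELECT COUNT(*) FROM enrollment WHERE term = ?", (term,)
    ).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enrollment (student TEXT, term TEXT)")
conn.executemany(
    "INSERT INTO enrollment VALUES (?, ?)",
    [("A", "fall2024"), ("B", "fall2024"), ("C", "spring2025")],
)
print(count_enrollment_reviewed(conn, "fall2024"))  # prints 2
```

A malicious input such as `x' OR '1'='1` makes the as-generated version count every row, while the reviewed version correctly counts zero. This is exactly the kind of defect that human review of generated code is meant to catch.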
