Pairing Clemson data and tools for responsible AI use

Artificial intelligence tools can help Clemson faculty and staff work more efficiently, but only if they are used responsibly and safely. When getting started, the first question is not simply which tool to use, but whether the data is appropriate for that tool in the first place. Responsible AI use requires careful attention to data protection, privacy and security.

Data classification at Clemson

Clemson’s data classification policy provides a framework for identifying and protecting University information based on its sensitivity, risk and applicable handling requirements. The policy is intended to help faculty, staff and other data users make informed decisions about how information is stored, shared, accessed and safeguarded across systems and workflows. In the context of AI, that framework is especially important because it helps determine whether a tool is appropriate for the information involved and what level of review or protection may be required.

Clemson uses four data classification categories:

  • Public
    • Information developed and intended for public disclosure.
    • Examples: Public-facing websites and published research data.
  • Internal Use
    • Information used in daily operations that is not generally available to the public and would create only minimal adverse impact if compromised.
    • Examples: Non-public University policies, contracts, reports, training materials and unpublished research data distributed only within Clemson.
  • Confidential
    • Sensitive information protected by University policy, procedure or contract.
    • Examples: HR records, personally identifiable information, University infrastructure information and FERPA-protected data.
  • Restricted
    • Highly sensitive and protected by law, regulation or contractual obligation, with significant consequences if exposed.
    • Examples: Controlled unclassified information, protected health information governed by HIPAA, export-controlled data, Social Security numbers and banking information.

When incorporating AI into your workflow, start with the data: determine its classification, then confirm which tools, if any, are appropriate for that use. Classification matters at every step because approval of a main platform does not automatically extend to every add-on or connector attached to it.

Clemson’s approved institutional AI tools

Clemson provides access to a curated set of AI tools that align with the University’s data governance, privacy and security standards. These institutionally licensed tools operate within Clemson’s secure technology environment and are covered by enterprise-level privacy and security controls.

That distinction matters when faculty and staff are deciding which AI tool to use. For example, Clemson’s access to ChatGPT Edu ensures that University information entered into the system is secure and is not used to train other AI systems. Clemson’s approved institutional AI tools differ from public or unvetted tools because they are covered by contract and configured for Clemson’s environment.

Even so, users should still follow University guidance on data classification and approved use, since not every institutional tool is approved for every type of data or every kind of integration.

Use the AI Guidelines data and tool matrix as a high-level guide to confirm whether the proposed use is permitted, requires prior approval or is prohibited.
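
In code terms, the matrix is a lookup from a (tool, classification) pair to one of those three outcomes. The short Python sketch below is illustrative only: the check_use helper and every entry in its placeholder table are hypothetical, not Clemson’s actual matrix, so treat the published AI Guidelines as the authoritative source.

    from enum import Enum

    class Classification(Enum):
        PUBLIC = "Public"
        INTERNAL = "Internal Use"
        CONFIDENTIAL = "Confidential"
        RESTRICTED = "Restricted"

    # Placeholder stand-in for the AI Guidelines data and tool matrix.
    # These entries are invented for illustration, not actual policy.
    MATRIX = {
        ("ChatGPT Edu", Classification.PUBLIC): "permitted",
        ("ChatGPT Edu", Classification.INTERNAL): "permitted",
        ("ChatGPT Edu", Classification.CONFIDENTIAL): "requires prior approval",
        ("ChatGPT Edu", Classification.RESTRICTED): "prohibited",
    }

    def check_use(tool: str, data: Classification) -> str:
        # Anything not explicitly listed is treated as unresolved,
        # mirroring the guidance: when unclear, do not enter the data.
        return MATRIX.get((tool, data), "unclear: confirm before use")

    print(check_use("ChatGPT Edu", Classification.RESTRICTED))    # prohibited
    print(check_use("Unvetted plugin", Classification.INTERNAL))  # unclear: confirm before use

Note how a connector ("Unvetted plugin") falls through to the unresolved default even though a main platform is approved, which is exactly why approval of a platform does not extend automatically to its add-ons.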

Practical questions to ask

Before using AI for any University work, faculty and staff should ask a few basic questions, sketched as a simple pre-flight check after the list:

  • Is the data Public, Internal Use, Confidential or Restricted?
  • Is the AI tool Clemson-approved for that classification and use case?
  • Does the workflow involve any add-on, plugin or other connector that changes where data goes?
  • Can the task be completed with less data, fewer identifiers or more generalized information?
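
For those who like an explicit gate, here is a minimal pre-flight sketch of that habit. It is not an official Clemson tool; it simply encodes the rule that a single unclear or negative answer should stop the workflow.

    # Minimal pre-flight gate mirroring the four questions above.
    # Illustrative only; it encodes a habit, not official policy.
    def preflight(classification_known: bool,
                  tool_approved: bool,
                  connectors_reviewed: bool,
                  data_minimized: bool) -> str:
        if all((classification_known, tool_approved,
                connectors_reviewed, data_minimized)):
            return "Proceed within the tool's published data classifications."
        return "Stop: consult your IT consultant or ithelp@clemson.edu first."

    print(preflight(True, True, True, True))
    print(preflight(True, False, True, True))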

If the answer to any of those questions is unclear, do not enter the data until you have confirmed the appropriate path forward. Faculty and staff who are unsure about data classification, data sharing or tool approval should consult their area’s IT consultant or email ithelp@clemson.edu.

Faculty and staff interested in a specific AI tool or connector should submit an assessment request through CheckIT to have the service evaluated for University use. This review helps determine whether the tool aligns with Clemson’s security, privacy, accessibility and data governance standards before it is used with University information. When the request involves research, the same submission will also connect to the Division of Research approval process to ensure the proposed use aligns with project requirements and responsible-use expectations. Using an approved tool within its published data classifications and permitted use does not require additional CheckIT review. That includes contractually protected enterprise AI tools, currently ChatGPT Edu and Microsoft Copilot, as well as the use of public data in public AI tools.

Clemson’s human-centered approach to AI depends not only on access and innovation, but on sound judgment about data. The goal is not to slow useful work unnecessarily, but to ensure that Clemson faculty and staff use AI in ways that are secure, compliant and appropriate for the information involved.