It’s important to understand common risks associated with AI and cloud-based services.
- Data Risks. The protection of sensitive or protected data requires strict controls on access by cloud and AI-based services. Data from devices (e.g., computers, phones, and tablets) or content accessed online may be transferred to external servers for processing. Without established enterprise agreements, there is a risk of exposing proprietary information, intellectual property (IP), and protected private data, including personally identifiable information (PII).
- Compliance Risks. Maintaining and protecting student records and PII is a federal compliance requirement under the Family Educational Rights and Privacy Act (FERPA).
- Bias Risks. Systematic bias may be introduced by the design of an AI model (algorithmic bias) and/or by the data it is trained on and accumulates over time (data bias). Poor data hygiene can limit the value of AI tools and lead to widespread discriminatory outcomes.
When using AI tools, consider these recommended best practices.
- Use only company-approved AI tools and platforms.
- Cloud-based and AI-based services must be reviewed and approved by Technology Services (TS) prior to use.
- Unauthorized access to sensitive data by AI services is strictly prohibited.
- Human Resources (HR) and talent-related documentation (e.g., annual reviews, corrective actions, performance improvement plans (PIPs), and contracts) is restricted and must not be added to public AI tools or services.
- Be skeptical of AI results and audit them to verify accuracy and reliability.
- AI may assist but is not a replacement for human judgment, expertise, and creativity.
- AI detection software is not reliable due to high false-positive and false-negative rates.
Report any AI-related concerns or issues to the TS team for review. Email details to tech@gordonconwell.edu for support.
IT-approved tools and services are listed in the Gordon-Conwell Support Portal help articles. To request a new tool or service, please send details, including the anticipated users and use case, to tech@gordonconwell.edu. A ticket will be opened and assigned to a support representative.
GCTS Technology Services (TS) has formed a multi-disciplinary AI Technology Governance team to guide best practices and support for enterprise AI tools.