As AI tools become increasingly integrated into our daily lives, we want to ensure that our community uses these powerful technologies safely and responsibly. Below are guidelines for using AI tools when working with Connecticut College information.
Approved AI Tools for Work Use
When using AI tools with any work-related information, please use only the following approved platforms:
- NotebookLM: use only with your Connecticut College Google Workspace account
- Google Gemini: through your Connecticut College Google Workspace account
- Zoom AI Assistant: built into our Zoom environment; ensure you are logged in with your Connecticut College account
- BoodleBox: our institutional AI platform (with access to ChatGPT, Claude, Perplexity, and other premium tools); ensure your Connecticut College email address is connected to the AI@Conn team
These tools have been vetted for enterprise-level security and data protection, compliance with educational privacy regulations, clear data governance policies that protect institutional information, and institutional controls over data retention and usage. Avoid entering Connecticut College information into non-institutional versions of ChatGPT, Claude, Perplexity, or other unapproved AI platforms.
Notetaking Bots in Zoom
We have disabled access for third-party notetaking bots such as Read.ai, Fireflies.ai, and Otter.ai in our Zoom environment. Zoom offers a robust, integrated AI assistant that provides similar functionality while keeping your data secure within our institutional environment. Read more about how to use the Zoom AI Companion.
Please note that while we have blocked these third-party notetaking tools from meetings hosted by Connecticut College’s Zoom domain, we are not able to control meetings hosted externally by vendors, consultants or colleagues at other institutions.
Agentic AI Tools - Prohibited Without L&IT Review
We also prohibit the installation and use of agentic AI tools that have not been reviewed and approved by Library & Information Technology (L&IT). Agentic AI refers to autonomous AI systems that can independently set goals, make decisions, take actions, and interact with external systems without continuous human oversight. Examples of prohibited agentic AI tools include OpenAI Atlas or Frontier, Perplexity Comet, and Moltbot/OpenClaw (or similar open-access agents). If you have installed any of these programs on your college-owned device, remove them immediately and coordinate a security review of your system with the IT Service Desk.
Why These Restrictions?
While many AI tools offer appealing features, unapproved platforms can create significant risks:
- Privacy and Security: External bots record and process institutional data on third-party servers, creating potential data security risks and compliance issues with student privacy regulations (FERPA). We are required by law to protect student and employee information (FERPA and HIPAA) and to maintain compliance with other data protection regulations; unapproved AI tools may not meet these legal standards. Further, personal and professional information can be inadvertently shared, putting the user's and the organization's data at risk.
- Autonomous Actions Without Oversight: Agentic AI tools can independently access, modify, or share sensitive data without human review, including digital wallets and saved passwords. These systems may interact with college databases, CRMs, and file systems in unpredictable ways.
- Consent and Legal Compliance: Not all meeting participants may be aware that an external bot is recording, which can create uncomfortable situations and potential legal concerns. Connecticut is an “All-Party Consent” state for recording conversations, so you have the right to say no to third-party tools within any meeting.
- Data Governance and Integration Risks: We have limited control over how third-party services store, use, or retain meeting data. Agentic AI systems can cause system disruptions, data corruption, or security vulnerabilities.
- Inadvertent Data Exposure: Information entered into unapproved AI platforms can expose confidential student records, personnel matters, or other sensitive institutional data.
- Training Data Concerns: Many free AI services use submitted content to train their models, meaning your work information could become part of their permanent dataset and potentially accessible to others.