Academic Policy on Artificial Intelligence
Introduction
This academic policy outlines the ethical and responsible use of Artificial Intelligence (AI) at the University. In alignment with the University's Christian mission, this policy is rooted in our commitment to fostering a learning environment that encourages innovation while upholding academic integrity, protecting data, and promoting human flourishing. As an institution dedicated to preparing students for lives of purpose, service, and leadership, we believe that the responsible use of AI is essential for equipping our community to engage with a rapidly changing world while remaining steadfast in our values.
Table of Contents
- Introduction
- Guiding Principles
- Teaching and Learning
- Research
- Administrative Use
- Security
- Enforcement and Policy Review
- Definitions
- Document History
Guiding Principles
The use of AI at the University is guided by the following principles, which reflect our Christian mission and values:
- Human Flourishing: As a Christian community, we believe that every individual is created in the image of God. AI tools should be used to enhance human creativity, critical thinking, and intellectual growth, not to replace them. Our goal is to equip students and faculty with the skills to use AI as a tool for deeper understanding and service, in line with our mission to serve God and others.
- Academic Integrity: As we pursue knowledge with integrity, all members of the University community are expected to maintain the highest standards of academic honesty. The use of AI must be transparent and properly attributed, as outlined in this policy, to ensure that the work presented is an honest reflection of one's own intellectual effort.
- Truth and Accuracy: As an institution committed to truth, we recognize that AI models can "hallucinate" or provide inaccurate information. Users are responsible for verifying all AI-generated content and ensuring its factual accuracy and alignment with scholarly standards, always seeking to discern truth from falsehood.
- Stewardship and Responsibility: We are called to be good stewards of our resources and relationships. This includes
being mindful of the ethical implications of AI, such as bias, copyright, and data
privacy, and using these tools in a way that respects the dignity of every person
and the integrity of creation.
Teaching and Learning
This section outlines the use of AI in teaching and learning settings.
- Syllabi: Faculty members have the authority to set specific AI policies for their individual courses, which must be clearly communicated in their syllabi. Course AI policies should align with program and course learning outcomes. Options for faculty to consider when communicating with students include permitted uses, prohibited uses, expectations regarding citation and transparency, and AI training opportunities when appropriate. The major style guides offer resources on using generative AI in academic writing, including how to cite the use of AI tools. The Office of the Provost provides guidelines for syllabus statements regarding AI use. Given the prevalence of AI technology, faculty are responsible for offering guidance on AI use in their courses. To ensure student accountability and clear expectations, AI guidance should be articulated at the individual assignment level. When creating assignments, faculty may choose to adopt a scale similar to the following.
| Level | Policy | Description |
|---|---|---|
| Level 1 | Prohibited | No AI tools may be used for any part of the assignment. |
| Level 2 | Assisted | AI may be used for brainstorming or checking grammar, but not for drafting. |
| Level 3 | Integrated | AI use is encouraged or required; transparency/citation is mandatory. |
- Academic Integrity: Faculty and schools should consider the impact of AI on existing academic integrity policies and enforcement, noting that AI detection tools may be inaccurate or generate false positives. Faculty are encouraged to use specific AI Use Levels (e.g., Levels 1-3) within individual assignments to create a clear record of expectations. Enforcement of academic integrity regarding AI is most effective when expectations are clearly defined at the assignment level. Additionally, faculty or schools may adopt policies that restrict the use of AI in grading student work, ensuring that human faculty expertise and discernment are employed in the evaluation of student learning.
- Students: Student AI use guidelines are to be determined by professors for their respective courses. This means that the extent and nature of permitted student AI use may differ from course to course and from assignment to assignment. Professors are free to prohibit, allow with limitations, or encourage student AI use. Students are responsible for communicating fully with their professors to understand the permitted use of AI as a learning tool and as an aid in completing coursework requirements. Using AI tools in unauthorized ways or without the consent of the professor may result in academic disciplinary action. Students who have any doubt about an assignment should seek clarification and guidance from their professor. Even when a professor allows the use of AI tools for student assignments, AI contributions must in most cases be declared and cited. Failure to disclose AI use or cite contributions may be considered plagiarism and may result in academic disciplinary action. Students permitted to use AI must abide by the following generally accepted AI use safeguards:
- Treat AI tools as public forums. Do not enter confidential or restricted information.
- Carefully review AI outputs for errors, hallucinations, plagiarism, and legally protected intellectual property.
- Everyone in the academic community is individually responsible for the accuracy and
integrity of their content.
Research
This section applies to all University-affiliated research, including faculty, staff, and student projects. This includes, but is not limited to, grant writing for internal and external funding, dissemination of research findings in presentations and publications, and the teaching of research and scholarship methods in courses.
- Ethical Review: Researchers using AI in a way that involves human subjects or sensitive data must comply with the University's Institutional Review Board (IRB) policies.
- Data Security and Privacy: Researchers must adhere to all University policies regarding data privacy and security. Confidential, restricted, or proprietary information, including student or personnel data, should not be entered into public AI tools. Enterprise-grade versions of public tools, such as the University's Google Gemini application, are acceptable for certain data types. If using the University's Google Workspace, you may use Public data (no restrictions) and Confidential data (password-protected and only shared with authorized parties), but never use Restricted data in any Google Workspace tool, including Gemini.
- Attribution and Integrity: Researchers must accurately report the use of AI in their work consistent with the standards of the relevant discipline, journal, funding agency, or publisher. Plagiarism and misrepresentation of AI contributions are serious violations of research integrity.
- Bias Mitigation: Researchers are responsible for acknowledging and accounting for potential biases in the AI models and data sets they use. They must be transparent about the limitations and potential biases of their AI-powered research.
- Confidentiality in Grant Applications and Peer Review: Faculty serving as reviewers for manuscripts, grant proposals, or other unpublished works must maintain strict confidentiality. Uploading such materials into public AI tools is prohibited, as it constitutes an unauthorized disclosure of intellectual property and violates author privacy. Federal agencies, including the National Institutes of Health (NIH) and the National Science Foundation (NSF), strictly prohibit the use of AI in the peer-review process and require explicit disclosures regarding the use of AI in the development of grant applications.
Administrative Use
This section applies to all University faculty and staff using AI in their daily professional work.
- Data Protection: University personnel must never input confidential, proprietary, or personally identifiable information (PII), such as student records (FERPA-protected), financial data, or personnel information, into public AI tools. Enterprise-grade versions of public tools, such as the University's Google Gemini application, are acceptable for certain data types.
- Security and Compliance: All AI applications must comply with the University's data security and privacy policies. Any third-party AI software or service must undergo a security review by the University's IT department and be covered by a formal contract that protects University data. Academic or business groups that integrate AI into University processes should consult with the Information Technology team to ensure data security and compliance.
- Accountability: Users of AI tools are responsible for the content they generate and the decisions they make based on AI outputs. AI should not be the decision-maker in human evaluations, including performance review, teaching, grading, and the creation and delivery of course content.
- Training: The University provides training on the secure and ethical use of AI tools for administrative
purposes, emphasizing data privacy and risk management.
Security
This section outlines the security implications and requirements for AI use.
- AI-Enhanced Security and Threat Detection: The University may use AI-powered tools for campus security and cybersecurity purposes, such as detecting malware, identifying phishing attempts, and monitoring network traffic for anomalies.
- Vulnerability Reporting: All members of the University community are encouraged to report any potential security vulnerabilities related to AI tools to the University's IT department.
- Prohibited AI Use: The use of AI tools for any illegal purpose or in violation of University policy
is strictly prohibited.
Enforcement and Policy Review
Violations of this policy will be addressed in accordance with the University's existing Student Code of Conduct, school-specific faculty handbooks, and other applicable University handbooks and policies. The Office of the Provost will regularly review and update this policy to keep it current with the rapidly evolving field of AI.
Definitions
- AI Hallucination:
- A phenomenon in which a Generative AI model perceives patterns or objects that are non-existent or imperceptible to human observers, resulting in the generation of outputs that are factually incorrect, nonsensical, or disconnected from reality.
- Artificial Intelligence (AI):
- A branch of computer science involving the development of systems capable of performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
- Confidential Information:
- University information that is used to conduct official University business with limited internal distribution, or that contains proprietary information or student record information covered by FERPA. See Personally Identifiable Information (PII).
- Enterprise/Private AI Tools:
- AI applications or platforms that are governed by a formal contract between the University and the provider. These tools include specific protections for data privacy, security, and intellectual property, ensuring that University data is not used to train the provider's public models. See Reviewed AI Tools.
- Generative AI (GenAI):
- A category of AI systems capable of creating new content, including text, images, audio, video, and computer code, based on the data on which they were trained. Examples include, but are not limited to, Large Language Models (LLMs) such as ChatGPT, Gemini, and Claude, as well as image generators such as Midjourney or DALL-E.
- Human-in-the-Loop (HITL):
- A requirement that a human review, verify, and take responsibility for any output generated by an AI before it is finalized, published, or used to make a significant decision.
- Personally Identifiable Information (PII):
- Any data that could potentially identify a specific individual. In a university context, this includes FERPA-protected student records, health information (HIPAA), and protected personnel or financial data.
- Prompt Engineering:
- The process of refining and optimizing the inputs (prompts) provided to an AI model to achieve a specific or improved output.
- Public AI Tools:
- AI services that are available to the general public (often via a free or individual paid tier), where data entered may be used by the provider to train future models. These tools generally do not meet the University's requirements for protecting confidential data.
- Public Information:
- Information that is not classified as restricted or confidential.
- Restricted Information:
- University information that includes authentication or password data; makes the University liable for damages due to unauthorized disclosure under laws, regulations, or contracts; or pertains to HIPAA-protected information.
Document History
This policy was initially drafted by the Office of the Provost, Fall 2025.
Approval Process:
- Dean's Council, February 2026
- UAC, March 2026