Generative AI Information for Faculty
Learn how to use AI to support your teaching and research while staying true to your own ideas and expertise.
What is Generative AI?
Generative Artificial Intelligence (GenAI) refers to tools that create new content, such as text, images, code, audio or video, in response to prompts or questions. Common examples include ChatGPT, Microsoft Copilot, Google Gemini and discipline‑specific AI tools.
At USD, GenAI is understood as a tool that can support teaching, learning, research and professional work when used responsibly and transparently. These tools are best positioned as supplements to human expertise and judgment, not replacements for faculty knowledge, disciplinary standards or pedagogical intent.
GenAI is part of the professional landscape our graduates will encounter. Thoughtful faculty engagement helps students learn how to use these tools critically, ethically and effectively. For additional support in using GenAI in courses or with students, please reach out to the Center for Teaching and Learning.
Responsible and Ethical Use for Faculty
Possible Uses of GenAI in Teaching and Academic Work
Academic Integrity and AI
Misrepresenting GenAI‑generated work as human‑generated, whether by students or faculty, may constitute academic misconduct under USD policy or raise professional ethics concerns under discipline‑specific guidelines.
Faculty are encouraged to address GenAI explicitly in academic integrity conversations rather than relying solely on detection tools, which are unreliable and raise privacy concerns. Output from software that assigns “scores” by matching student submissions or monitoring student behavior (e.g., Turnitin, Respondus Monitor) should be verified and reviewed by a human before any decision about academic misconduct is made.
Recognizing AI Content
No single indicator proves AI use. However, multiple patterns together may warrant a conversation with the student.
Overly Polished but Shallow Responses
- Fluent writing with limited depth, original insight or disciplinary specificity.
- Generalized explanations that avoid taking a position or making a claim.
Generic Structure and Language
- Repetitive paragraph formats (e.g., perfectly balanced introductions and conclusions).
- Formatting commonly found in GenAI output (e.g., lines between sections, bolded text throughout, etc.).
Lack of Engagement with Course Materials
- Missing, incorrect or fabricated citations.
- References to concepts not covered in the course, or failure to reference required readings.
Confident Inaccuracies
- Statements presented with certainty that are factually incorrect or oversimplified.
- Errors that reflect common AI misunderstandings rather than novice human reasoning.
When GenAI Use Is Suspected, Use Process, Not Policing
When questions arise, the recommended next step is engagement and education, not accusation.
Productive approaches include:
- Asking students to explain or defend their work orally.
- Requesting drafts, annotations, or reflections on decision-making.
- Having students identify challenges they faced and how they addressed them.
These approaches support learning and academic integrity without relying on detection tools.
Focus on Alignment with Learning Goals
Faculty are encouraged to frame concerns around learning outcomes, not tool use.
Useful questions include:
- Does this work demonstrate the skills the assignment was designed to assess?
- Can the student explain their reasoning and choices?
- Is the process as visible as the final product?
Assignments that emphasize process, iteration and reflection naturally reduce misuse and make student thinking more visible.
A Note on Embedded AI Tools
Many commonly used platforms now include AI features by default, including word processors, learning management systems and design tools. Faculty should remain aware of these features and help students understand when AI is involved.
A tool may be using AI if it:
- Automatically rewrites, summarizes or expands content
- Suggests sentences, explanations or solutions
- Generates content when clicking options like “rewrite,” “improve,” or “summarize”
- Produces polished responses without showing intermediate reasoning