Integrity and AI
At Drexel I'm a member of a recently assembled committee called the Provost's Office Working Group on Policy for Academic Integrity with Artificial Intelligence. Like many (most? all?) colleges and universities, we're trying to create academic integrity guidelines (or guardrails) for our faculty and students.
One aspect of our working group's efforts that I like is that the initial item in the background of our statement describes the value of AI in education. We're not treating these tools first and foremost as a problem: instead, we are treating them as tools with vast potential, but potential we must understand within a sphere of sensible, ethical guidelines.
This matters because our local work is hardly unique: statements and guiding principles are being posted everywhere. Here are a few samples.
Even early on, people were developing language. Ethan and Lilach Mollick, for example, provide sample course policy wording in “Why All Our Classes Suddenly Became AI Classes” (1):
I expect you to use AI (e.g., ChatGPT and image generation tools) in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill and I provide tutorials on how to use them. I am happy to meet and help you with these tools during office hours or after class.
Be aware of the limits of ChatGPT, such as the following:
- If you provide minimum-effort prompts, you will get low-quality results. You will need to refine your prompts in order to get good outcomes. This will take work.
- Don’t trust anything it says. If it gives you a number or fact, assume it is wrong unless you either know the answer or can check with another source. You will be responsible for any errors or omissions provided by the tool. It works best for topics you understand.
- AI is a tool, but one that you need to acknowledge using. Please include a paragraph at the end of any assignment that uses AI explaining what you used the AI for and what prompts you used to get the results. Failure to do so is in violation of academic honesty policies. Be thoughtful about when this tool is useful. Don’t use it if it isn’t appropriate for the case or circumstance.
In February 2023, Colorado State's Institute for Learning and Teaching described a "multiple possibilities" approach to AI syllabus statements, including examples from syllabi:
- The Prohibitive Statement declares that any AI "use on graded work/work for credit will be considered a violation of the academic misconduct policy." This statement is appropriate "for a class in which the course outcomes have been compromised by use of the technology," such as a writing course.
- The Use-With-Permission Statement "makes it clear that only certain uses will be acceptable" to the instructor.
- The Abdication Statement includes an example that states, "From this point forward, I will assume that all written work has been co-authored or entirely written by ChatGPT." In this case, instructors will grade such writing "normally," and the "grade will be a reflection of your ability to harness these new technologies as you prepare for your future in a workforce that will increasingly require your proficiency with AI-assisted work."
Notre Dame's May 2023 statement is short, strongly worded, and straightforward: "When students use generative AI (such as ChatGPT) to replace the rigorous demands of and personal engagement with their coursework, it runs counter to the educational mission of the University and undermines the heart of education itself."
Ben Trachtenberg, director of the University of Missouri's Office of Academic Integrity, took an FAQ-style approach in a March 2023 statement, answering questions such as these:
- May Students Use ChatGPT and Similar Tools for their Academic Work?
- Can Professors Tell if Students Use ChatGPT?
- What’s the Bottom Line?
Answering the last question, he indicates that students should seek to uphold Mizzou's values as described in the Mizzou Honor Pledge: "If you think your instructor would object to your using ChatGPT (or a similar tool) in a certain way, you should not do it. If you are unsure, you should ask your instructor first."
Interestingly, the resource site for the Royal Military College of Canada (RMC)/Canadian Forces College (CFC) addresses practical concerns, from the low quality of detection tools to whether community members can be expected to use commercial AI services at all: "Until formal direction is received from the Department of National Defence, it is suggested that, for privacy, ethical, and security reasons, no student, faculty, or staff should be required or expected to create or pay for personal accounts in commercial generative systems such as ChatGPT."
In Listserv conversations I have followed within Drexel and beyond, many programs and individual faculty have tried to define the way they will judge AI use in their own classrooms. In addition, members of the academic community are increasingly concerned that authors will submit AI-composed manuscripts.
As usual, it's a good idea to ask colleagues what they are doing: Our teaching communities often contain deep wisdom, and we should continue to take collaborative, local approaches to creating sensible policies about the use of AI/ML. The material above represents just a few of these efforts.
Notes:
1) Ethan Mollick and Lilach Mollick, "Why All Our Classes Suddenly Became AI Classes," Harvard Business Publishing Education, February 9, 2023.
Labels: AI, AI guidelines, integrity, plagiarism