Thursday, November 30, 2023

Generative AI and integrity concerns: More discussion board support

As I wrote a few months ago, the hand-wringing about generative AI and academic integrity taking place in academia led me, as a teacher, to return to a fundamental educational technology that has served me well: message boards.

The writing assignment submission approach I'll briefly describe has lots of pedagogical upside, but I'm talking about it here in the context of AI-driven concerns about academic integrity.

For my current course, students submit four short writing projects in addition to many low-stakes assignments. I call these writing projects Exploration dialogues; you'll see why in a moment.

They submit these projects in two ways.

One, they submit to a Turnitin dropbox in our Blackboard (Bb) Learn course, a method I use not so much for plagiarism policing as because Turnitin has an effective audio commenting function that is great for response and is seamlessly integrated into Bb Learn. Although I don't require submitted rough drafts of every project, students must also submit rough draft material, even photos of notes from class. (The photos, by the way, are great and super fascinating.)

But they also post their final drafts to a Bb Learn Discussion, and I provide them there with these instructions:

Please cut-and-paste only your final draft of your Exploration dialogue Project 2. Do not use an attachment.

On this thread, read two of your classmates’ projects and respond to them. From the Syllabus:

On a Discussion board for each project, you will read two of your classmates’ projects and write short (at least 50 words) but meaningful Reactions. There is no fixed framework for your Reactions, but asking questions and seeking points of clarification are ideal. These Reactions may also allow you to connect the project with your own work in the course or perhaps your own experiences. The project author then will reply to these Reactions, either individually or collectively. 

Feel free to ask me questions!

Philosophically, this approach is nothing whiz bang new. Teachers do things like this. These straightforward threads have yielded interesting results in terms of student dialogue and learning for this course. I could talk more about that, as I mentioned.

But I believe this method offers a constructive approach to academic integrity at a time when teachers are not sure what to do. Can students still misrepresent their writing, generative AI or otherwise? Of course! But there are implications if they submit a paper that is not theirs.

For one, they may reveal weak mastery of the content, which could become apparent as these message board conversations unfold.

More importantly, they are doubling down on integrity issues by writing about something that isn't theirs! I mean, they have to really have ice water in their veins to engage in a conversation about writing that is not theirs, fooling not just the professor but their own classmates. Could it happen? Does it happen? Of course it could! But shoring up against such behavior through an open course writing environment like this seems less like cheating vigilance and more like using good pedagogy for a collectively good end.


Saturday, September 30, 2023

Screens, attention, and mental health

I want my students to learn--of course!--but I mainly want them to be fulfilled and happy people. Writing is a path to those broader goals. I don't want fundamental aspects of my teaching to get in the way. I follow a teacherly version of the Hippocratic Oath. 

Lately, I've been thinking about how such general goals of student wellness might clash with online learning. Of course, as if this nearly 20-year-old blog weren't evidence enough, you should know that I am a firm advocate of the value of online learning, especially online writing (and literacy) instruction; simply, OWI gives students access to learning in ways they otherwise wouldn't. It extends the classroom in writing-rich ways. It's great.

But there are costs with any benefit, and one in particular resonated with me after I recently finished the book Stolen Focus by Johann Hari. This fascinating book looks at "why we seem to have lost our sense of focus, and how we can get it back" (p. 9). Using a mix of research and anecdote, Hari unspools a series of causes; unsurprisingly several of them connect with our life around screens (although he discusses other strong trends and causes too).

Consider that students who sign up for our online courses do not receive an accompanying app that keeps them focused when they are "in" class, particularly an asynchronous class. We can't control behaviors such as students distractedly scrolling the web while "in class," nor can we ensure that our class work doesn't push them over daily, weekly, or monthly screen-time limits.

Should we be looking into this more, and, if so, how? How might we approach conversations about limiting screen time in environments fundamentally mediated by screens? After all, many screens have inherent physiological effects, as Maida Lynn Chen pointed out in The Conversation in discussing insomnia, depression, and screens: "... the light emitted from most handheld devices, even with a night filter, a blue light filter, or both, is enough to decrease levels of melatonin, the primary hormone that signals the onset of sleep."

Intriguing research and pedagogical avenues seem open here for teachers to explore and critique the structure of their very courses.

We want to learn more, as students' education could be intertwined with tools that may have deleterious effects on their attention spans and perhaps their overall mental health. This, I suppose, is akin to ergonomics studies showing the effects of sitting in a chair all day at work or school and seeking solutions.

For this brief post, I did a cursory review of some research. The pandemic opened opportunities to study this area. One example is the metastudy "A systematic review of screen-time literature to inform educational policy and practice during COVID-19," published in the International Journal of Educational Research Open by Siamack Zahedi, Rhea Jaffer, and Anuj Iyer, which looked at 52 studies but found them "too small," "inconclusive," or "critically underrepresented" and suggested, "These facts, along with the undeniable benefits of online learning in the absence of brick-and-mortar schooling and the ominous forecasts of learning loss caused by prolonged school closure, inform our recommendations for a more moderate policy and practical stance on restrictions...." Another is the Education Next article "Should We Limit 'Screen Time' in School?" by Daniel Scoggin and Tom Vander Ark, which has a forum connected to it.

Looking briefly at the available literature, I'm thinking about next steps, and, as I type away on the web interface of my computer (on a Saturday), I'm wondering if we need to develop tools--what would that even mean!?--to help our online students stay focused.


Monday, July 31, 2023

Integrity and AI

At Drexel I'm a member of a recently assembled committee called the Provost's Office Working Group on Policy for Academic Integrity with Artificial Intelligence. Like many (most? all?) colleges and universities, we're trying to create academic integrity guidelines (or guardrails) for our faculty and students.

One aspect of our working group's efforts that I like is that the initial item in the background of our statement describes the value of AI in education. We're not treating these tools first and foremost like a problem: Instead, we are treating them like tools with vast potential, but potential we must understand within a sphere of sensible, ethical guidelines.

This is crucial, because in line with our local work, statements and guiding principles are being posted everywhere. Here are a few samples.

Even early on, people were developing language, such as the course policy wording by Ethan and Lilach Mollick in “Why All Our Classes Suddenly Became AI Classes” (1). They provide sample course policies:

I expect you to use AI (e.g., ChatGPT and image generation tools) in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill and I provide tutorials on how to use them. I am happy to meet and help you with these tools during office hours or after class.

Be aware of the limits of ChatGPT, such as the following:

  • If you provide minimum-effort prompts, you will get low-quality results. You will need to refine your prompts in order to get good outcomes. This will take work.
  • Don’t trust anything it says. If it gives you a number or fact, assume it is wrong unless you either know the answer or can check with another source. You will be responsible for any errors or omissions provided by the tool. It works best for topics you understand.
  • AI is a tool, but one that you need to acknowledge using. Please include a paragraph at the end of any assignment that uses AI explaining what you used the AI for and what prompts you used to get the results. Failure to do so is in violation of academic honesty policies. Be thoughtful about when this tool is useful. Don’t use it if it isn’t appropriate for the case or circumstance.

In February 2023, the Institute for Learning and Teaching at Colorado State University described a "multiple possibilities" approach to syllabus statements about AI, including examples from syllabi:

  • The Prohibitive Statement states that any AI "use on graded work/ work for credit will be considered a violation of the academic misconduct policy." The Prohibitive Statement is appropriate "for a class in which the course outcomes have been compromised by use of the technology," such as a writing course.
  • The Use-With-Permission Statement "makes it clear that only certain uses will be acceptable" to the instructor.
  • The Abdication Statement includes an example that states, "From this point forward, I will assume that all written work has been co-authored or entirely written by ChatGPT." In this case, instructors will grade such writing "normally" and the "grade will be a reflection of your ability to harness these new technologies as you prepare for your future in a workforce that will increasingly require your proficiency with AI-assisted work."

Notre Dame's May 2023 statement is short, strongly worded, and straightforward: "When students use generative AI (such as ChatGPT) to replace the rigorous demands of and personal engagement with their coursework, it runs counter to the educational mission of the University and undermines the heart of education itself."

Ben Trachtenberg, director of the University of Missouri's Office of Academic Integrity, took an FAQ-type approach to a March 2023 statement, answering questions such as:

  • May Students Use ChatGPT and Similar Tools for their Academic Work?
  • Can Professors Tell if Students Use ChatGPT?
  • What’s the Bottom Line?

Answering the last question, he indicates students should seek to uphold Mizzou's values as described in the Mizzou Honor Pledge: "If you think your instructor would object to your using ChatGPT (or a similar tool) in a certain way, you should not do it. If you are unsure, you should ask your instructor first."

Interestingly, the resource site for Royal Military College of Canada (RMC)/Canadian Forces College (CFC) focuses on syllabus policies about the low quality of detection tools: "Until formal direction is received from the Department of National Defence, it is suggested that, for privacy, ethical, and security reasons, no student, faculty, or staff should be required or expected to create or pay for personal accounts in commercial generative systems such as ChatGPT."

In Listserv conversations I have followed within Drexel and beyond, many programs and individual faculty have tried to define the way they will judge AI use in their own classrooms. In addition, members of the academic community are increasingly concerned that authors will submit AI-composed manuscripts.

As usual, it's a good idea to ask colleagues what they are doing: Our teaching communities often contain deep wisdom, and we should continue to take collaborative, local approaches to creating sensible policies about the use of AI/ML. The material above represents just a few of these efforts.


1) Harvard Business Publishing Education, February 9, 2023.


Wednesday, May 31, 2023

Trying to keep up with natural language processing tools

Turnitin responded quickly. Faced with natural language processing tools, it deployed its AI writing detection model. One day recently I opened Turnitin through my CMS, and there was the AI detector, ready to go.

This is likely a necessary response by the long-standing plagiarism guardian. "AI" has zoomed past cut-and-paste, and tools like Turnitin are trying to keep up.

As I've said on several occasions, I use Turnitin not so much for its plagiarism-catching talents but because it is integrated into Blackboard and has an easy-to-use commenting interface that includes the ability for a teacher to create a three-minute voice response. (I wish more teachers would use voice commenting features...)

We know its AI checker is not going to be perfect, but I had to relay this brief anecdote about just how far away it appears we are.

A sharp student in my class Language Puzzles and Word Games: Issues in Modern Grammar just submitted her second short paper.

Ah, relief: The AI checker came up 0%. (I expected nothing less, of course!)

A problem, though: Her project, in line with what we're doing in the course, compared a paper she wrote in another class with how ChatGPT would have responded to a similar prompt. In her paper, she used giant chunks of ChatGPT-generated text to illustrate differences in grammar, usage, and style between her work and the natural language generator's output.

0%. Zero. Zilch. Nada. The AI detector didn't pick up any of it.

(The text she used was not bound by quotation marks, FYI.)

No one is expecting perfection, especially at this stage (and Turnitin includes serious qualifiers about the effectiveness of its tool), but we should remain very aware that digital watchdogs likely won't catch digital text. We have to do it ourselves.

We're in the early stages of a major shift in teaching, and we're collectively engaged with on-the-fly decisions about how we're adjusting to that change. I trust my students, but we teachers will invite authorial misrepresentation if we don't develop pedagogies on the front end instead of relying on cross-our-fingers detection after the papers are in.


Friday, March 31, 2023

AI apps and student writing worries?: An old, reliable OWT practice can help

I wish I was writing about this cool, high-tech (and maybe high-profit!) way of dealing with the AI chatbot app storm and the implications for students submitting inauthentic texts in your courses.

But I won't leave you in suspense: I'm not.

Instead, to help increase the chances that you are reading authentic student texts, building on my last post, I suggest you lean heavily on three things:

  1. A "text brew" of different, course-specific readings that students must synthesize into one piece of writing.
  2. Students drawing on their personal writing and learning experiences.
  3. A dialogic writing environment on message boards.

This approach won't solve all your problems, and it takes me back to one of the first posts I ever composed for this space, back in September 2005: using message boards!

During my winter '23 course, I realized what most of us have: We cannot now do what we've always done when assigning writing. Short response papers, canned/recycled essay topics, papers that are just exams-disguised-as-writing--AI chatbots can easily, and sometimes expertly, respond to them. We need to be nimbler and more innovative.

Make no mistake, these assignment adjustments will require preparation time. We need to find texts that speak to a topic we want to discuss, develop a prompt about those texts, and then have the students write and respond to each other on threads. However, we may find ourselves increasingly replacing the time we spend assessing/grading "big" papers with looking more closely at these types of student texts.

I taught a first-year writing course in the winter, and I ran some of my prompts through ChatGPT; I was struck by the inability of the app to respond. In simple terms, I knew who was on the other end: No one.

For instance, I used this combination of texts several times:

  1. Two articles from the excellent anthology series Writing Spaces, which contains chapters about writing written for a student audience.
  2. An article from The Atlantic Monthly magazine; I had assigned an issue of the magazine as a class text.
  3. A student-authored reading from something special we have at Drexel: A long-standing annual publication The 33rd, which features award-winning student writing and faculty writing in various genres and disciplines.

I shook up this "brew" of texts and wrote message board prompts asking this of students:

  1. Address a specific aspect of writing discussed in the Writing Spaces chapter.
  2. Use the Atlantic and 33rd pieces to provide specific examples of that aspect of writing.
  3. Describe their own specific writing experiences in the context of this conversation.

Of course, this was happening on message boards, so part of the writing requirement was that they respond directly to each other in context. Note that the posts are evidence-driven; I even ask that they provide brief works cited/resources lists. Also, I emphasized that message board posts are informal: I don't expect them to be mechanically perfect.

On a given thread, students wrote substantively, sometimes easily surpassing 1,000 words in multiple posts.

My prompts aren't foolproof, but based on my sample runs, ChatGPT didn't have command of Writing Spaces texts and had zero "knowledge" of The 33rd, so its efforts to respond to my prompts were awkward at best. Because of my "informal" guideline, the AI-generated texts also stood out and looked odd.

What I'm suggesting isn't perfect. Right now, nothing is. But tilting my class away from "traditional" papers to dialogic writing helps me feel that when I look at student writing, what I get is, well, theirs.


Monday, January 30, 2023

ChatGPT and online writing environments

Like many of you, I've been inundated with information about ChatGPT. For some, the sky is falling. For others, these generative AI language modeling technologies represent opportunity.

For many of us in the writing instructional world, the response is kind of this: Told ya so.

ChatGPT is more powerful than other AI apps, stretching way back to that old chatterbot ELIZA. It can wow and amaze--to a point. If you ask it canned school writing topics/prompts, you'll get back canned responses. What's been surprising to me is how many people will say, "This looks like what a student could have written." That depends on what you've asked those students to write.

If you put students in conversational, multi-audience writing environments in which they write to discuss a variety of texts, not to prove knowledge mastery (or even that they've simply read), even another human trying to help a student cheater would struggle.

Such environments are the foundation of many online writing courses (OWCs).

In online writing instruction (OWI) workshops I conduct for faculty, I have a slide for when people inevitably express concerns about plagiarism in the type of asynchronous OWCs we focus on. The slide is initially blank; then, using a PowerPoint animation, a big red heart appears. "Colleagues," I say, "if someone else is doing all this work, it's gotta be love!"

My point was that the work being done in my asynchronous writing courses was so extensive and involved that anyone who tried to replicate it, week in and week out, for another student would basically have to take the whole course in that student's stead. They would really have to like that person! I guess I could have alternated my big heart with a dollar sign, because such substitute studenting could only be done for love or money.

We may be worried about ChatGPT's ability to respond to writing questions/prompts such as "Describe Hamlet's indecision" or "provide reasons why you are for/against gun control," but teachers should always be concerned about student responses to these kinds of prompts.

To paraphrase crudely: It's the assignment, stupid. (I wrote about this in lay terms on my other blog.)

If you have students writing in multi-audience, multi-text environments such as class message boards, you're going to provide them with very specific rhetorical situations that would be difficult for someone else to engage in without clearly appearing out of place.

I'm back almost to day one on this blog: Use discussions/message boards and other asynchronous environments. Here is a brief example of a prompt from my first-year argument writing course plugged into ChatGPT, along with the AI model's response; I include my comments in brackets:

Peer review challenges [*My prompt]

Dear class, 

DePeter’s chapter focuses on providing “meaningful” peer review (Reid mentions review too). What has your experience been like through your school years as a peer reviewer? In terms of being a reviewee, what sort of specific revision challenges do you tend to encounter during the drafting process? How do you work with those challenges? 

I asked you this week to read Mai. What peer review advice might you offer this author? 

Prof. Warnock 

ChatGPT response 

As a language model, I do not have personal experience as a peer reviewer or reviewee [*AI honesty]. However, I can provide information on best practices for peer review. 

DePeter and Reid [*These are readings from Writing Spaces, a superb anthology series designed for student writers to help them think about and study writing as subject matter] emphasize the importance of providing meaningful feedback during the peer review process [*Peer review is only mentioned tangentially in Reid's chapter]. This includes focusing on the strengths and weaknesses of the manuscript [*The word "manuscript" doesn't appear in Reid or DePeter--it's a jarring, out-of-context word], providing specific and actionable suggestions for revision, and approaching the review with a constructive and supportive attitude [*This is nice, highly general stuff about peer review but not specific to DePeter; in short, anyone could have written it, and it doesn't answer the prompt].

In terms of specific revision challenges, authors may struggle with organization, clarity, and coherence of their manuscript [*Again]. They may also have difficulty with grammar and punctuation. To work with these challenges, authors should review their manuscript for these issues and consider seeking feedback from beta readers or a writing group [*Both of these terms are also jarring and acontextual considering the readings. These comments are, bluntly, much less interesting than what actual students said, some of whom had visceral, emotional experiences with peer review].

As for offering advice to Mai [*Mai is an award-winning first-year student essay found in Drexel's in-house annual publication, The 33rd], I would recommend that they [*The AI smartly chooses "they"] focus on the organization and coherence of their manuscript [*Again, "manuscript"; Mai's piece is clearly an essay or even a paper]. They may also consider seeking feedback from beta readers or a writing group [*Again, both of these terms are jarring considering the readings] to help identify any areas that may be confusing or unclear. Additionally, it's important for them to be open to feedback and willing to make revisions based on the feedback they receive. [*This advice is obviously not specific to Mai's essay. Students in my course provided advice about her use of personal reflection as well as the kinds of sources she used. Responses like this make it evident the respondent did not read Mai's essay.]

ChatGPT is indeed amazing, and teachers are finding great applications for it: I've already collected scores of links. Teachers, it won't ruin our lives--but it may sharpen our teaching, if we let it.


Wednesday, November 30, 2022

Tough debate in asynchronous environments

Here's nothing new: It can be tough finding common ground with people nowadays. (Hopefully Thanksgiving last week wasn't yet another proof of that for you.)

With bad models from Facebook to cable news to the holiday dinner table, where can our writing students refine the ability to have reasoned debate? A bloodless course paper with no real audience (i.e., writing for the teacher only) isn't the best platform. In-class, face-to-face discussions about sensitive topics can fall flat, even in the hands of skilled teacher-moderators; outrage is possible, but silence and hesitancy are more likely.

I was thinking about how students are skilled in digital communication, mainly through social media, but how seldom they have "curated" or moderated debates in such modalities. This thinking, coupled with my preparations for my winter asynchronous first-year argumentative writing course, led me back to a familiar place: Asynchronous discussion forums.

I wondered if those common, humble forums, in the context of a course, could be especially productive places to practice tough debate. 

I've said in the past that teachers are often intimidated by students' digital experience and fluency, and, yep, they are social media wizards. But they often come to us never having had an academically/pedagogically moderated argument in writing in which they had to carefully articulate points and perspectives. They seldom write in in-depth conversations.

This week, as we are concluding our course The Peer Reader in Context (a writing-intensive course taken primarily but not solely by potential Drexel Writing Center tutors), my students and I read "Interactional dynamics in on-line and face-to-face peer-tutoring sessions for second language writers" (1). In this Journal of Second Language Writing article, the authors reviewed transcripts of face-to-face and online tutoring sessions in second language contexts and discovered, surprisingly (especially in 2006), that it was the face-to-face sessions that seemed "to lend themselves to more hierarchical relationships in which tutors take control of the discourse" while "on-line interactions" appeared "to lead to more egalitarian relationships, with clients controlling the discourse more."

Why am I mentioning this article? Because it emphasizes that students may feel more at home on their own "turf" online. Students' general digital familiarity, coupled with a teacher-moderator, can create a learning space in which students debate tough topics respectfully, thoughtfully, and in writing.

In previous first-year asynchronous argumentative writing courses, I would introduce at least one real hot-button topic thread during the term. It wasn't a mandatory thread, so students who didn't want to discuss that topic didn't have to. But now, when perhaps cultural temperatures are higher, I may use these forums more frequently as a renewed way to find our way into dialogue.


1) By Rodney H. Jones, Angela Garralda, David C.S. Li, and Graham Lock. Published in volume 15, pp. 1-23.
