Friday, March 31, 2023

AI apps and student writing worries?: An old, reliable OWT practice can help

I wish I were writing about this cool, high-tech (and maybe high-profit!) way of dealing with the AI chatbot app storm and its implications for students submitting inauthentic texts in your courses.

But I won't leave you in suspense: I'm not.

Instead, building on my last post, I suggest you lean heavily on three things to help increase the chances that you are reading authentic student texts:

  1. A "text brew" of different, course-specific readings that students must synthesize into one piece of writing.
  2. Students drawing on their personal writing and learning experiences.
  3. A dialogic writing environment on message boards.

This approach won't solve all your problems, but it does take me back to one of the first posts I ever composed for this space, back in September 2005: using message boards!

During my winter '23 course, I realized what most of us have: when assigning writing, we cannot now do what we've always done. Short response papers, canned/recycled essay topics, papers that are just exams disguised as writing: AI chatbots can easily, and sometimes expertly, respond to them all. We need to be nimbler and more innovative.

Make no mistake, these assignment adjustments will require preparation time. We need to find texts that speak to a topic we want to discuss, develop a prompt about those texts, and then have students write and respond to each other on threads. However, we may find ourselves increasingly replacing the time we spend assessing/grading "big" papers with time spent looking more closely at these kinds of student texts.

I taught a first-year writing course in the winter, and I ran some of my prompts through ChatGPT; I was struck by how poorly the app responded. In simple terms, I knew who was on the other end: no one.

For instance, I used this combination of texts several times:

  1. Two articles from the excellent anthology series Writing Spaces, which contains chapters about writing aimed at a student audience.
  2. An article from The Atlantic; I had assigned an issue of the magazine as a class text.
  3. A student-authored reading from something special we have at Drexel: our long-standing annual publication The 33rd, which features award-winning student and faculty writing in various genres and disciplines.

I shook up this "brew" of texts and wrote message board prompts asking this of students:

  1. Address a specific aspect of writing discussed in the Writing Spaces chapter.
  2. Use the Atlantic and 33rd pieces to provide specific examples of that aspect of writing.
  3. Describe their own specific writing experiences in the context of this conversation.

Of course, this was happening on message boards, so part of the writing requirement was that they respond directly to each other in context. Note that the posts are evidence-driven; I even asked students to provide brief works cited/resources lists. Also, I emphasized that message board posts are informal: I didn't expect them to be mechanically perfect.

On a given thread, students wrote substantively, sometimes easily surpassing 1,000 words across multiple posts.

My prompts aren't foolproof, but based on my sample runs, ChatGPT didn't have command of the Writing Spaces texts and had zero "knowledge" of The 33rd, so its efforts to respond to my prompts were awkward at best. Because of my "informal" guideline, the AI-generated texts also stood out; their polish looked odd next to genuine student posts.

What I'm suggesting isn't perfect. Right now, nothing is. But tilting my class away from "traditional" papers and toward dialogic writing helps me feel that when I look at student writing, what I get is, well, theirs.
