Observing and evaluating online writing courses, part 2
Last July, I wrote some thoughts about peer observations and evaluations of online writing courses (OWCs). I promised -- for you and me -- some further examination of this topic. So, last week, at Computers & Writing at St. John Fisher College in Rochester (a well-run and enjoyable conference -- thanks to the organizers and local hosts), my colleague from Kent State, Mahli Mechenbier, and I presented on conducting such evaluations.
In my talk I was able to show, I think, some progress on my initial articulation of an Elbowesque “movies-of-the-mind” approach to conducting peer evaluations of OWCs. I opened my session by giving the audience members a “task”: “During my session, take some notes about what happens, and then imagine turning them into an observation letter you would give to me about the session.” I circled back to the “task” at the end. My point was that observations of things like teaching (or conference talks) are always subjective and shaped by the relative expertise of both evaluator and evaluatee. By understanding that dynamic, we can avoid the judgment-driven process that has done a bizarrely paradoxical thing: oversimplifying evaluations while overcomplicating the evaluation process.
The “how” of this “movies-of-the-mind” approach is straightforward, I said: I simply write a letter to the observed faculty member describing what I see and experience in the course. With an OWC, the whole course is laid out in front of an evaluator, so the evaluation can be broader than the customary one-shot onsite visit. I did mention, though, that even though I can see most of the course, I still want the teacher to guide me through it, showing me only what they think is appropriate.
By using this approach, the evaluator avoids reductive rubrics and metrics and instead simply narrates what they see of a colleague's teaching.
Now, politics do emerge in the process. We conduct these evaluations in the context of the academic hierarchy. Mahli emphasized this in her session, pointing out evaluation “issues” that include rank, power disparities between faculty, the lack of a true peer relationship, being unknown to one's evaluator, and the perception that the whole process is a “waste of time.”
As she does in much of her work, Mahli focused on contingent faculty. Especially because so many contingent faculty are asked to teach online, the question of how such faculty are evaluated, and by whom, is a big topic, she said. Your campus might use Quality Matters or even (however unlikely) have developed its own evaluation methodology, but who has the expertise to conduct an evaluation -- especially if all of the online teachers are contingent but the institution requires tenured faculty to do the evaluating?
I ended my talk with a big zoom-out: the reductive evaluations we do in our classes continue up the ladder until we're creating stupid, standardized-test-based metrics for entire schools. Such evaluations have had devastating impacts on communities.
We discussed these issues in the direct context of the C&W crowd, and we're going to focus on the administrative side of OWC evaluations at the Council of Writing Program Administrators conference in Raleigh in a month and a half.
Labels: course evaluations, course observations, teaching writing online
2 Comments:
"Reductive rubrics" Okay, will let that pass - heh heh.
I'm curious: can you elaborate on the "devastating impacts" claim?
As for the general conundrum of quality in courses and power/hierarchy issues... Disney, along with many others, solved this problem a long time ago through mystery shoppers. It's a move I expect accreditors will have to make at some point if the federal government keeps ratcheting up the pressure on ROI for billions in guaranteed loans.
Thanks for this comment. I should have been clearer. "Reductive rubrics" could be read as a compound noun, but I meant "reductive" very much as a modifier: there are great rubrics out there. As for "such evaluations have had devastating impacts on communities": the public school vs. charter school war happening in this country right now is evidence of that, I think. In my little town, people pull their kids out of the high school stream strictly based on test scores. Of course, I'm also reading a bunch of Diane Ravitch lately.
I love the mystery shopper idea.