May 29, 2003

Excellence in Teaching

But for many students, evaluations of faculty members simply mirror their own evaluations. In short, whoever gives them the highest grades is loved. Affection need not necessarily denote respect. Any professor seeking popularity who draws the obvious conclusion will grade more leniently or assign less work.

The basic tendency is already there. The bubble-sheet evaluations gathered from students at end of term and used in promotions and tenure decisions account, I believe, for a good deal of grade inflation. Anything fostering an undue desire to be loved compromises the professional obligation to call upon students to work hard and rise above their present selves. We must submit to some forms of evaluation. Why add voluntary incentives to corruption?

-- Max Clio, "Learning from a Teaching Contest"


An assistant professor of history at a "small public university" explains why he withdrew from a contest in which "students choose their favorite teachers by ballot." He is harshly critical of the winner of last year's adjunct teacher award (it seems there are two awards, one for regular faculty, one for adjuncts), whom he describes as

a boisterous Spanish teacher known for throwing nacho parties in the classroom, providing students with backrubs, and flirting shamelessly. She also happens to grade with a light touch. All of this was a bit scandalous in itself, but to me the point was brought home far more vividly when four of my students arrived tipsy and flushed to our historiography seminar late one afternoon this winter. They readily confessed that the reason for their inebriation was that the Outstanding Teacher of the Year had taken her class earlier that day to the nearby Mexican restaurant to practice Spanish with the waiters. It appears they tested out useful phrases like 'Margarita, por favor.'

Backrubs?! This sounds so over-the-top that I am inclined to wonder whether his account is completely accurate. In any case, I think it's fair to assume that these are two faculty members who are no longer on speaking terms...

Anyway, his basic point strikes me as all too accurate: student evaluations are often measures of consumer satisfaction closely linked to a teacher's grading practices.

Posted by Invisible Adjunct at May 29, 2003 02:46 PM
Comments
1

I seem to remember a double-blind study carried out by researchers at Duke that found that evaluations tracked grades. No surprise there. A very prolific professor once told me that he doles out As to the undergraduates so that they leave him alone to write.

I will say, though, that, like many first-person accounts in the Chronicle, the article seemed like a very cheap shot. While the identity of the author is probably safely disguised, enough details were provided that I'm sure the faculty at the school will recognize both their institution and the offending Spanish instructor. What does it say about the Chronicle that it appears to serve as an open forum for academics to settle scores?

Posted by: A Frolic of My Own at May 29, 2003 03:16 PM
2

Suppose that a company decided to evaluate its sales reps by taking a survey of the customers. If the customer likes the rep because "he's always responsive to our problems," that would be one thing. But if the customer likes the rep because "boy, he sure does give big discounts," that would be something else entirely. It would be very difficult to develop a survey to properly distinguish the causes--even if you asked explicit questions, the "halo effect" surrounding the big-discount-giver would probably cause him to come across as better-liked.

The applicability to faculty popularity contests is obvious.

Posted by: David Foster at May 29, 2003 03:48 PM
3

I actually had a slightly different experience with evaluations -- which is in no way meant to refute what has been said, just perhaps offer a different data point.

My evals varied not so much by grade (the grade runs were pretty much the same in every class I taught, and I didn't control grading standards anyway) but by *student quality*. I got my best evals from the class with the best, brightest, most involved students -- even though they didn't all get A's, far from it.

The best information I got from evals had nothing to do with bubble-in garbage. I'm all for eliminating that stuff; it's useless. Our evals had real questions on them, and the answers were indeed useful. At least to me.

I do think student evals can be useful. But students should have to sign their names to them -- we never saw our evals until after final grades were turned in, so no worries about effects on grading there -- and should have to write real answers to real questions, not bubble in answers to vague PR silliness.

Posted by: Dorothea Salo at May 29, 2003 03:49 PM
4

I also have problems with the way student evals are used. However, I think if you work with a "nontraditional"-aged population (i.e., late 20s and older), as I do, they are less likely to respect party-oriented classroom environments and empty assignments. Most of them work full time, and while they do try to trim down the amount of coursework (I don't budge on that one!), they seem to respect fairness across the board. In other words, yes, the course might be challenging, but they were fully aware of the level of work involved via the syllabus and via setting things straight the first night of class. Where you seem to run into trouble is when the syllabus is super-vague, or specifics of the assignments aren't listed in writing. This is not to say that an organized teacher will never encounter the classic "backstabbing" eval that comes out of the clear blue. Most of us have had one of those babies in our careers!
As for teacher-of-the-year awards, etc., I don't really like them. They breed competition and animosity. I never participated in these kinds of contests when I taught in the public schools and was not about to in graduate school. I wouldn't do it now! If a department wants to give out a cash prize, why not simply make it the luck of the draw, as long as entrants meet basic eligibility requirements? These prizes seem to keep people away from the real issues of low pay, inequity, and the abuse of adjuncts. Can we say "carrot"?

Posted by: Cat at May 29, 2003 04:25 PM
5

Backrubs? Listen to this:

About 1980 I had an Iranian friend who was an undergrad. Our school required that everyone take either "health" or "sex education". He decided to take sex education, for whatever reason. He was 25-30, a man of the world, and wondered why he was taking a class requiring him to watch films of men masturbating (he didn't mention that any valuable new techniques were presented).

The kicker, though, is that the teacher (F., 25-30, PhD in something or other, and as I remember, cute) was a big advocate of singles bars and told the class which one she hung out at. (This was the pre-AIDS era, but then, we had to walk to school barefoot in the snow uphill both ways in those days too.)

So I imagine that there was a way to get an A-plus in that class. Probably she did independent studies with some students too. On the other hand, some guys going for the A-plus might have ended up with B-minuses or even worse. This was pre-Viagra too.

Posted by: zizka at May 29, 2003 04:27 PM
6

My guess is that evaluations in most cases are used as a reserve resource by the powers that be: since most teachers will get bad evaluations at some point, they can be pulled out when needed against anyone the powers would like to eliminate -- but I suspect there is no real consistency. Some will be dinged for bad evaluations, some won't.

The alternative is to bring pizza to class, kiss your students' bums, etc., which will ensure better evaluations. The person who does this is probably politically adept in other areas as well, so overall that person will do better in the current environment.

Clearly the desirable situation is for those in charge to exercise mature professional judgment, including having their own way of knowing what's going on in the classroom aside from student evaluations.

I would definitely be predisposed to agree that blinded studies would probably show little correspondence between student evaluations and teaching effectiveness. But I believe studies have also shown for many years that the students who get As in college classes are not necessarily those who are most intelligent, work the hardest, or gain the most from the course.

This suggests work is still to be done in key areas of the academy.

Posted by: John Bruce at May 29, 2003 04:48 PM
7

Two points, based on my experiences teaching at three (highly ranked) liberal arts colleges over the past five years.

(i) Some of the most popular profs at all three schools are some of the harshest graders. I've actually seen very little evidence of grade-based evaluating.

(ii) At my current employer (at least till I move yet again in a month -- to a state school), grading harshness figures as a criterion of tenurability. That is, what grades you give, and where they fall relative to others', figures in your department's tenure report. It could thus undermine the force of good evals come tenure time if it turns out that you've graded high. (This is presumably true of other promotions as well.)

I suspect that I'll discover next fall that neither (i) nor (ii) is true at state universities. But I thought it worth pointing out these counterexamples to the thrust of your post.

Posted by: Ted H. at May 29, 2003 05:21 PM
8

I agree with Dorothea about the usefulness of real questions. As a somewhat conservative bubbler on fill-in evaluations myself, I am sceptical of whether much meaning can be generated by knowing that one student thinks my "respect for students" is a 4 and another thinks it's a 3. On the other hand, knowing how these figures and my grade distribution compare to those of other professors in my department and institution -- that begins to become useful. (I experienced this for the first time last fall -- but now they've changed the eval format.)

Our institution has now gone to on-line, self-customizable evaluations -- they provide templates, and you can tweak the wording, questions, scales -- everything. It makes comparison between faculty impossible (a good thing?) and potentially (it's a new system) offers much better feedback on the success of the course.

But then, too, we have fairly responsible students here -- they actually write constructive comments. I remember having the job at a previous institution of sorting student evaluations and being very amused by their fixation on what their instructors wore.

Posted by: Rana at May 29, 2003 05:53 PM
9

I've always thought the best evaluation should ask one question: "How can this course be improved in the future?" Students would be forced either to give the question serious thought or to write nothing. No bubbles. No numbers. Sure, there would be no way to quantitatively evaluate faculty across the university, but who said teaching and learning can (or should) be quantified?

I once TAed for a professor who never bothered to read his evaluations (he obviously had tenure). His argument was that he knew how poorly his students evaluated literature, so why should he care about how they evaluated his teaching? Although it was a snide comment, it gets to what I find to be the heart of the issue. Other experienced faculty are best qualified to evaluate teaching for review and promotion. Since faculty aren't going to take the time to observe and mentor the teaching of their peers, quantified questionnaires will continue to be the rule.

Posted by: A Frolic of My Own at May 29, 2003 07:38 PM
10

A Frolic of My Own,
You make a good point about the cheap shot. Frankly, I doubt that Mr Clio will be able to hide his identity: if faculty at his institution recognize the school and the Spanish instructor, they will surely be able to figure out who the author is (how many assistant professors of history can there be who teach historiography seminars and who opted out of the teaching contest?)

Zizka writes:
"(This was the pre-AIDS era, but then, we had to walk to school barefoot in the snow uphill both ways in those days too.)"

Barefoot through snow to get to the singles bars?...

Posted by: Invisible Adjunct at May 29, 2003 08:22 PM
11

I wonder if evaluations given after the students graduated would yield more useful results - an enormous number of teachers would have been forgotten, but the remaining memories might be significant.

Or not; all I remember about one of my professors is that he could wear three different plaids and look snazzy, and that he didn't know that beer is lethal to slugs. Only the latter was relevant to French literature. (No! In hindsight, I also remember that he had snogging relationships with some of his students, in & out of the local singles bar, which I think finally lost him his job.)

Posted by: clew at May 29, 2003 09:16 PM
12

Here in Japan, it's reported that the single most accurate predictor of student evaluation scores is the age of the teacher. The younger the teacher, the higher the evaluations. Many teachers suspect that the evaluations are merely a stick for driving away older, more highly paid people.

If these were really for improving teaching, it's obvious that they wouldn't be numerical scores. A number is next to useless. This is a dead giveaway that they're a tool for enhancing the bureaucracy's power over teachers. I ignore them completely, since they're not for teachers, but for bureaucrats.

Posted by: guy at May 30, 2003 02:24 AM
13

Frolic: Your description of the tenured professor who never reads student evaluations also describes me. I've taught for a long time and handed out a lot of evaluations (readers might be interested to know that it's not unusual to be asked to hand out two or even three separate evaluation forms every semester, one from one's department, one from the student government, another from wherever... and some schools have evaluations that feature over fifty questions). It's overkill, as is so much in the academy. I still hand out the evaluations (not religiously, but most of the time), but I haven't read one in years. Once you're established as a competent teacher, you ought to be left alone.

Posted by: Chantal at May 30, 2003 06:38 AM
14

Chantal, that assumes that once competent, always competent.

Doesn't always work that way. Burnout plays a role, for one thing, and so does whatever it is that leads to deadwood-ism.

I can certainly accept less frequent evaluations for an experienced teacher, but "none at all" I can't like.

Posted by: Dorothea Salo at May 30, 2003 02:07 PM
15

Yes, children, singles bars are older than shoes.

Many, many years ago there used to be a terrifying disease called "herpes". Why doesn't anyone worry about herpes any more? Gather around and I'll tell you....

Posted by: zizka at May 30, 2003 03:03 PM
16

Dorothea: Note that I said I still hand out course evaluations; I continue to be reviewed all the time. I just don't read them. If my department informs me that my evaluations are starting to show that I'm becoming incompetent, I promise to go quietly.

Burnout and deadwoodery, by the way, are caused, not cured, by the sort of slow-death-by-bureaucracy (ask a public school teacher) of which excess course evaluations represent a tiny part.

Posted by: at May 30, 2003 05:57 PM
17

Sorry - that was me, Chantal, just then.

Posted by: chantal at May 30, 2003 05:59 PM
18

Fair enough, Chantal. Though I think you might be missing out on some honest, and honestly useful, evaluations.

One thing I'd personally like to see on evaluations is a simple yes/no "Have you ever done any classroom teaching?" The evaluations I gave improved *enormously* once I'd been in front of a class myself. They were much more specific, zeroed in on solutions rather than problems, and -- I am arrogant enough to think -- really would have helped a couple of my teachers.

For example, I recommended that a phonetics prof use an overhead rather than the chalkboard or handouts to write up problems, so that she could easily prepare transparencies in advance. Minor change, *major* savings in class time, *major* effect on in-class energy (often vitiated by the time it took to write a single problem on the board).

And no reason to trash the professor, which I didn't do.

Posted by: Dorothea Salo at May 30, 2003 10:36 PM
19

Clew: A late response, but one of my colleagues has published a study (which I have yet to read) that argues that post-graduation evaluations are a more reliable and valid gauge of teacher effectiveness than the standard end-of-semester ones. Not that this will have an impact on the process, especially in schools where the reviews do not seem to be read by anyone.

As a TA, I convinced my students to work the word "cantaloupe" into the text of their evaluations as subtly as they could, to see if this would raise any flags as the reviews made their way from the dean to the chair to my advisor to me. I doubt I need to mention that it went unnoticed.

Posted by: Alex at May 31, 2003 06:51 PM
20

I think written evaluations can provide valuable feedback. But the numerical stuff is just so much nonsense.

Many instructors would love to be able to ignore the evaluations but are not in a position to do so.

Posted by: Invisible Adjunct at May 31, 2003 09:46 PM
21

I dunno, Alex. Great idea, but some TAs are going to have trouble with that specific word...

"I really liked my TA's cantaloupes!"

;)

Posted by: Dorothea Salo at June 1, 2003 01:15 AM
22

*snerk!*

Posted by: Rana at June 2, 2003 02:44 AM