In response to Nate Kornell’s blog post:
When you measure performance in the courses the professors taught (i.e., how intro students did in intro), the less experienced and less qualified professors produced the best performance. They also got the highest student evaluation scores. But more experienced and qualified professors’ students did best in follow-on courses (i.e., their intro students did best in advanced classes).
Bottom line? Student evaluations are of questionable value.
These results are… surprising. Well, not that surprising, since we know that students don't always have the right idea of what constitutes learning — consider the difference between being able to finish a homework problem and, say, having the concepts organized in your mind to the point that you could teach them to others. But for student evaluations and long-term performance to be negatively correlated? That implies that students' cognitive ability to evaluate (Bloom's Taxonomy tier 5/6) their own learning and/or the professor's performance is critically flawed.
Now, the blog post linked above wants to read something into this. In fact, the authors of the original paper offer three possible explanations for these results:
- More experienced teachers are less likely to teach to the test, and instead draw on a broader curriculum and aim for deeper understanding.
- A student who has an easy time in a course (and easy courses typically get rated higher) may develop poor study habits or fail to see where they are lacking.
- Students who get a poor grade because of poor teaching will work harder in later classes to compensate, earning better grades there.
But these are all conjecture, and I'm sure you can write your own explanation of why this occurs. My personal hypothesis is that students evaluate a professor based on what they think they learned rather than what they actually learned, with a tilt factor for how entertaining or interesting the professor makes class. Whatever the actual reason, I'm now more suspicious of student evaluations.