Educational Consulting, Learning, Pedagogy

Understanding Course Evaluations

At the end of almost every course, colleges and universities in the United States have students fill out a course evaluation: a form that gives the school feedback about the class, the instruction, and the textbook. Several interesting recent studies examine the effectiveness of these evaluations, including one out of UC Berkeley on the validity of student course evaluations as measures of teaching effectiveness. Its results echo those of the many studies conducted in the past: student course evaluations are not reliable indicators of teaching effectiveness:

Student ratings of teaching have been used, studied, and debated for almost a century. This article examines student ratings of teaching from a statistical perspective. The common practice of relying on averages of student teaching evaluation scores as the primary measure of teaching effectiveness for promotion and tenure decisions should be abandoned for substantive and statistical reasons: There is strong evidence that student responses to questions of “effectiveness” do not measure teaching effectiveness. Response rates and response variability matter. And comparing averages of categorical responses, even if the categories are represented by numbers, makes little sense. Student ratings of teaching are valuable when they ask the right questions, report response rates and score distributions, and are balanced by a variety of other sources and methods to evaluate teaching.

What do student course evaluations measure, then? The authors of this study summarize the findings of previous studies here:

  • Student teaching evaluation scores are highly correlated with students’ grade expectations (Marsh and Cooper 1980; Short et al. 2012; Worthington 2002). WHAT THIS MEANS:
    • If you’re an instructor and want high course evaluations, pass out As like candy.
    • Adjunct instructors, having the least job security and the most job retention anxiety, are most likely to inflate grades to get high course evaluations.
    • Net result: over-reliance on adjunct instructors and on student course evaluations to evaluate teachers leads to grade inflation and low course rigor; i.e., poor educational quality.
  • Effectiveness scores and enjoyment scores are related. In a pilot of online course evaluations in the UC Berkeley Department of Statistics in Fall 2012, among the 1486 students who rated the instructor’s overall effectiveness and their enjoyment of the course on a 7-point scale, the correlation between instructor effectiveness and course enjoyment was 0.75, and the correlation between course effectiveness and course enjoyment was 0.8.
    • WHAT THIS MEANS: If students enjoyed the course, they will rate it highly. But enjoyment by itself isn’t a measure of learning. The instructor may just be a good performer.
    • Conversely, lack of enjoyment doesn’t mean the student didn’t learn. The types of assessments and activities that promote long-term retention, in fact, lead to low course evaluations. The practices that students like the least actually help them learn and retain the most. See the link right above.
  • Students’ ratings of instructors can be predicted from the students’ reaction to 30 seconds of silent video of the instructor: first impressions may dictate end-of-course evaluation scores, and physical attractiveness matters (Ambady and Rosenthal 1993).
    • WHAT THIS MEANS: student course evaluations are, more than anything else, superficial measures of instructor popularity rather than teaching effectiveness.
  • Gender, ethnicity, and the instructor’s age matter (Anderson and Miller 1997; Basow 1995; Cramer and Alexitch 2000; Marsh and Dunkin 1992; Wachtel 1998; Weinberg et al. 2007; Worthington 2002).
    • WHAT THIS MEANS: student course evaluations are, at worst, racist, elitist, ageist, and sexist superficial measures of instructor popularity.
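The statistical objection about averaging categorical responses can be made concrete: two instructors can have identical mean ratings on a 7-point scale while the underlying score distributions tell very different stories. A minimal sketch (the ratings are invented for illustration):

```python
from statistics import mean
from collections import Counter

# Hypothetical 7-point-scale ratings for two instructors.
# Instructor A is consistently rated "good"; Instructor B is polarizing.
a = [4, 4, 4, 4, 4, 4, 4, 4]
b = [1, 1, 1, 1, 7, 7, 7, 7]

print(mean(a), mean(b))  # both means are 4 -- indistinguishable by average
print(Counter(a))        # every rating clusters at 4
print(Counter(b))        # ratings split between 1 and 7
```

Reporting the full score distribution (along with the response rate) distinguishes these two cases immediately; the average alone cannot, which is exactly the study’s point.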

So how do we rate teaching effectiveness? I’d recommend the following:

  • Worry less about evaluating the teacher for promotion and focus on gauging effectiveness for the sake of seeking out the most effective strategies for that specific student population.
  • Rely in part on peer evaluations conducted by teachers in the same field. Field-specific knowledge matters, as teaching isn’t just a matter of technique but of careful selection of content.
  • We still do want to hear from students, of course, so use course evaluation tools that focus on teaching effectiveness, such as those provided by the IDEA Center.

Just for the record, I’ve always been an engaging instructor who generally gets high course evaluations, so I’m not worried about myself here. I am, however, worried about how effectively students are being educated. Reliance on student course evaluations, at present, is working against educational quality.

You can read the study below:

Cost of Degree, Educational Consulting, Majors and Areas of Study, Pedagogy, Understanding the Market

Podcast: James Rovira and the Anazoa Educational Project on Punkonomics

I’ve blogged quite a bit about a number of higher education topics, but what’s my own vision for higher education? I spent some time with Dr. Beni Balak, Professor of Economics at Rollins College, on the phone for his Punkonomics podcast to discuss the Anazoa Educational Project and my vision for higher education. Follow the link to listen to the podcast.

Educational Consulting, Learning, Machines, Pedagogy, Technology

Why You Should Take Notes by Hand

Because of the way that human beings interact with laptops, studies indicate that students who take notes on laptops don’t learn nearly as much as those who write out their notes on paper.

This learning differential doesn’t exist only because students are distracted while working on their laptops; it’s the use of the laptop itself. Students taking notes on a laptop attempt to capture everything that’s being said, so they act more like passive recipients of information (like stenographers) rather than thinking about the lecture.

On the other hand, students who take notes on paper have to think about what they’re writing down because they can’t possibly capture everything. That means they’re more cognitively engaged with the lecture material than the laptop note taker. Even a week later, students who took notes on paper scored higher on tests for both conceptual and factual content than laptop note takers.

But in addition to this difference, students taking notes on laptops are indeed distracted by other things on their laptops: according to other studies, students using laptops in class spend about 40% of their time looking at non-course material such as Facebook, email, and chat.

These bad practices disseminate throughout educational institutions. Because students aren’t learning as much, they complain about the quality of their education (a result noted in one study). School administrators listen to these complaints and attempt to address what they take to be outmoded instructional methods.

To appear innovative, they then spend a lot of money on educational technology that puts learning onto a screen. Having committed millions of dollars to this tech, schools then adjunctify the faculty pool to cut costs, which further degrades instructional quality. The problem is not that adjunct instructors are bad instructors, but that they are badly paid and badly overworked.

As a result, we have a higher educational system that everyone says is “broken” because of “outmoded instructional methods” but that no one thought was “broken” until relatively recently (say, the last fifteen to twenty years).

The real fix: shut off the laptop and take notes on paper. Just read the study.

Some great points made during a LinkedIn discussion about these ideas:

  • Handwriting on a tablet may well be a good middle way between typing on a computer and handwriting notes on a pad of paper, if you can get a good app for it. I haven’t had any luck (I use an iPad Air), but this tech is continually evolving, and I get the impression others have.
  • There is neuroscience supporting the idea that your brain processes things differently when handwriting as opposed to typing, so this difference may also be related to how our brains and bodies work together. In fact, different areas of the brain are activated when printing by hand compared to writing in cursive, so even different types of handwriting matter.
  • The study is just about one specific activity — note taking — so of course wouldn’t necessarily apply to group work and other tasks that require more engagement than passive recording of notes on a keyboard.
  • There are always exceptions. Some students need the support provided by electronic devices when note taking. Let’s just be careful not to define the rule by its exceptions.

College Writing, Learning, Machines, Pedagogy, Technology

Print vs. E-Books

We all have to use what works best for us, but it’s also smart to pay attention to some of the latest research, which indicates that reading print books rather than electronic books is better for us in several ways:

  • Print books lead to increased comprehension. The tactile experience of reading a printed book actually matters. Check out the research.
  • Related to the above, we’re more likely to read every line of printed material. When we read e-books, we tend to read the first line and then just the words near the beginning of the line after that.
  • We lose the ability to engage in linear reading if we don’t do it often.
  • Reading printed material for about an hour before bedtime helps us sleep. Reading ebooks keeps us awake.

I read both e-books and print books, and I’m grateful for my e-readers (really, the apps on my iPad) when I’m traveling. It’s easier to carry 1000 books on one iPad than it is to carry five in a backpack. I relied a great deal on an app called iAnnotate while I was reading for my last published scholarship, the introduction and chapter 1 of Reading as Democracy in Crisis. I can’t tell you how useful the app was to me: it allowed me to highlight, underline, and annotate dozens of PDF files and then email my annotations to myself. Imagine having all of the text that you highlighted in all of your books gathered up in searchable electronic form.

Even with this experience, I know what the researchers mean by the tactile elements of memory, the feeling of better control over your media with pages. I do remember where to find things in books by their physical location in the book, which isn’t possible with an e-reader: you can only search terms and page numbers. I think the point here isn’t which search method is more efficient, but which reading style engages more of the brain by engaging more of our physical senses. So I appreciate ebooks and use them quite a bit, but for educational purposes, especially in K-12 environments, we should use them carefully and deliberately, being aware of their drawbacks as well.

I’d like you to consider a few things about the way we developed our technologies:

  • The people who developed our technologies didn’t have our technologies. In other words, the people who built the first computer didn’t have computers.
  • The engineers who landed men on the moon did most of their work on slide rules.
  • The computers they did use at first had less computing power than our telephones.

So we should use the best technology available to us while being aware of its limitations. Don’t dump your printed books. Continue reading in multiple media, and make sure your children especially regularly read printed books.