Education Podcast Commons, Measuring Teaching Effectiveness and the Wisdom of Crowds

John Dale’s blog, Autology (curious name there), points me to an SSRN article that I somehow missed.

Professor Benjamin Barton at the University of Tennessee College of Law has uploaded a draft of his article "Is There a Correlation Between Scholarly Productivity, Scholarly Influence and Teaching Effectiveness in American Law Schools? An Empirical Study".

The spoiler is that he finds that there is NOT a correlation, but I was especially struck by the way he explains going about measuring teaching effectiveness…

"…For better or worse, teacher evaluations are the only viable way to measure teaching effectiveness for a study of this breadth. My other choices were exceedingly unpalatable: 1) attempt to gather peer evaluation data, which is rarely if ever expressed numerically, and would also almost certainly not be provided by the host institutions; or 2) use some type of personal subjective measure of teaching effectiveness, potentially requiring me to personally visit classes and make my own determination on teaching effectiveness…"

First of all, I am struck that there is so little out there in the way of measuring teaching effectiveness. You would think that for a service that costs upwards of $30k per year at some law schools, there would be a rather detailed or sophisticated system of measuring quality outputs.

I will grant that bar passage rates, grade point averages and other such things act as a kind of measure. Furthermore, the difficulty of obtaining a law degree and procuring a tenured position in a law school forces a measure (though an apparently unmeasurable one) of quality control on the teaching process.

But this is not the point of this post.

Rather, I see a possible solution to Barton’s two alternative methods of measuring teaching effectiveness.

1) attempt to gather peer evaluation data

2) use some type of personal subjective measure of teaching effectiveness

I refer the reader to a recent post of mine where I posited that law students could be used to overcome technical and manpower barriers for recording law school podcasts. If that idea has merit and many, many students step forward to record their classroom lectures – and faculty allow it – there could quickly be a large collection of podcasts from a large number of faculty available for Barton and his peers to listen to and evaluate for teaching effectiveness.

The podcasts would have to be freely available for Barton to organize peers in a kind of Legal Education Podcasting Commons (hereafter LEPC). At worst, such a commons could have rating systems like YouTube's that capture listener ratings by popularity, most commented on, most downloaded, etc. SSRN makes use of the number of downloads as a kind of proxy for quality (or at least popularity, I suppose).

With a large enough corpus of materials, the podcasts could be tagged and rated in different ways, or for appropriateness to different educational tasks like…

  • great for exam review
  • best explanation of this topic
  • good for students new to <topic>

etc, etc.
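To make the tagging-and-ratings idea a little more concrete, here is a minimal sketch of the kind of record an LEPC might keep for each podcast. Everything in it – the PodcastEntry name, the fields, the example tags – is hypothetical, not drawn from any existing system; it just shows that the metrics mentioned above (downloads, comments, listener ratings) and task-oriented tags could live in one very simple structure.

```python
from dataclasses import dataclass, field

# A hypothetical LEPC record -- none of these names come from an existing system.
@dataclass
class PodcastEntry:
    instructor: str
    course: str
    topic: str                                   # e.g. "personal jurisdiction"
    tags: set = field(default_factory=set)       # e.g. {"great for exam review"}
    downloads: int = 0                           # SSRN-style popularity proxy
    comments: int = 0
    ratings: list = field(default_factory=list)  # listener scores, say 1 to 5

    def average_rating(self):
        """Crude crowd aggregate; None until at least one listener rates it."""
        return sum(self.ratings) / len(self.ratings) if self.ratings else None
```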

If students making the podcasts provide some decent metadata – like the specific topic being covered – then other second-order effects become likely. Students who are having trouble with a particular topic could search the LEPC for other instructors lecturing on the same topic. I don’t think this will result in everyone listening to Arthur Miller; the LEPC will develop its own long tail.
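A small follow-on sketch, again with hypothetical names: given records like the one above, the topic search imagined here is just a filter plus a sort on whatever popularity proxy the commons exposes.

```python
# Hypothetical search over the commons: pull every lecture on a topic,
# ranked by a chosen popularity proxy (downloads by default).
def find_lectures(entries, topic, sort_by="downloads"):
    matches = [e for e in entries if topic.lower() in e.topic.lower()]
    return sorted(matches, key=lambda e: getattr(e, sort_by), reverse=True)

# e.g. find_lectures(commons, "personal jurisdiction") would surface every
# instructor in the corpus who covered it -- the long tail in action.
```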

Once the podcasts are out there, all sorts of layers of evaluation and metadata can be applied, and this includes faculty evaluating themselves. During my interviews with faculty podcasters, several mentioned listening to their own podcasts as a way of improving their teaching – a nice second-order effect of professional self-development.

Students, of course, could rate the podcasts as well. I am not sure that the sample sizes will be large enough or that we would see a "wisdom of crowds" effect, but that’s part of the unpredictable and emergent behavior of the Internet. We get rather useful, though fairly rare, feedback from students about CALI lessons. Every lesson has a button that can send an email to us, and we forward useful comments on to the authors if they will help improve the lesson.

I made a prediction in my talk at AALL (podcast or screencast) that in five years, pre-law students would be listening to law faculty podcasts (and demanding to listen to them) as a part of their decision making in choosing a law school. That is certainly a qualitative measurement.

Faculty hiring decisions could be based – in part – on the quality of the classroom lectures as podcasts. Faculty who are teaching a class for the first time could listen to more experienced faculty teach. The authors of casebooks would be incented to provide access to their classroom lectures so that adopters of their casebook could "teach like the author intended". There are all sorts of uses for LEPC.

Measuring teaching effectiveness would be just one, but improving teaching effectiveness would be the real hoped-for benefit.
