Saturday, October 25, 2008

How do we know a Faculty Professional Development program is effective?

Lessons learned from a one-day workshop, by Alvaro H Galvis, director of the CETL at WSSU.

I had the opportunity to participate in an NCSPOD/POD 2008 pre-conference workshop on the evaluation of professional development efforts, co-facilitated by Dr. Cindra Smith and Michelle DeVol, coauthors of the handbook Evaluating Staff and Organizational Development (2003, retrieved October 24, 2008). I took away the following three key ideas:

  1. Not every professional development program requires the same level of evaluation. Using Kirkpatrick’s Four Levels of Evaluation model (1994, retrieved October 24, 2008), Smith and DeVol suggested always collecting data on reactions to the program (level 1) and moving into deeper levels of evaluation (level 2 = learning, level 3 = transfer, level 4 = results) only when the professional development effort merits it. For instance, for a brownbag lunch it is worth knowing who came and whether they liked what they heard, but a summer institute with fall and spring follow-up also merits knowing what participants learned, how they are applying it, and what the impact is on students’ learning.
  2. Evaluation of a professional development program should start with its design (“start with the end in mind,” as they say), since a clear understanding of why it is worthwhile or necessary to offer the program leads to a clear definition of outcomes and of strategies for evaluating whether they have been achieved.
  3. Evaluation reports serve several purposes, the most common being to demonstrate or justify what was done. Smith and DeVol have found that “portraits of engagement,” i.e., one-page executive summaries, are the most important dissemination piece of an evaluation report, since in many cases that summary is all people read and it is what motivates (or does not motivate) further reading.

The workshop facilitators also suggested complementary resources for evaluators of professional development programs.
