Given the impact performance evaluations have on scholars’ careers, it is troubling that the judging process can yield unreliable conclusions. To shed some light on the subject, Donald R. Bacon, Pallab Paul, and Kausiki Mukhopadhyay of the University of Denver, together with independent scholar Kim A. Stewart, present a new evaluation model to clarify expectations and improve fairness in their article, “A New Tool for Identifying Research Standards and Evaluating Research Performance,” published in the newly released August 2012 issue of the Journal of Marketing Education. The authors explain:
Much has been written about the evaluation of faculty research productivity in promotion and tenure decisions, including many articles that seek to rank various marketing journals. Yet how faculty evaluators combine journal quality, quantity, and author contribution to form judgments of a scholar’s performance is unclear. A mathematical model of faculty judgment is presented that estimates a scholar’s research productivity and proves surprisingly consistent with actual faculty evaluations. The model does not replace human judgment in evaluating a scholar’s research performance, but it enhances clarity and objectivity in the evaluation process. The method is demonstrated with marketing faculty at one university.
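The abstract does not spell out the model itself, but the idea of combining journal quality, publication quantity, and author contribution into a single productivity estimate can be sketched in a few lines. The following Python snippet is purely illustrative: the tier labels, weights, and equal-split co-author credit scheme are assumptions for the example, not the scheme used in the article.

```python
def productivity_score(publications, quality_weights):
    """Sum quality-weighted credit across a scholar's publications.

    publications: list of (journal_tier, n_authors) tuples
    quality_weights: dict mapping journal tier -> quality weight

    Each paper contributes its journal-quality weight divided equally
    among co-authors -- one simple way to fold author contribution in.
    The actual model in the article may weight these factors differently.
    """
    total = 0.0
    for tier, n_authors in publications:
        total += quality_weights[tier] / n_authors
    return total


# Hypothetical record: two A-journal papers (one solo, one with three
# authors) and one B-journal paper with two authors.
weights = {"A": 1.0, "B": 0.5}
pubs = [("A", 1), ("A", 3), ("B", 2)]
print(round(productivity_score(pubs, weights), 3))  # -> 1.583
```

A scheme like this makes the evaluators' implicit trade-offs explicit: changing the quality weights or the co-author credit rule immediately shows how the same record would be scored under different standards.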