Exercise your academic freedom!  

Praise, complain, inform, amuse -- you are free to write what you will. 

There are only two rules:  (And if you don't like 'em, you're free to go create your own blog site.)

1. Postings should relate in one way or another to the  Michigan Business School.  So if you've got a beef about, say, Canadian bacon, take it somewhere else. 

2. Comments are open and unmoderated, except that I will remove  obscene or (excessively) abusive remarks (if and when they come to my attention).  For the most part, I'll leave it up to you, the readers, to "discipline" each other through your comments and responses. 

Finally, this is an experiment.  I have never done one of these blog things before and know next to nothing about the technology.  So if you have suggestions for improvements, and you have the time and knowledge to implement them, we should talk.   

 



UMBS-BS*

An (unofficial) site for University of Michigan Business School students, faculty and staff to express thoughts and opinions of (possible) interest to the UMBS community as a whole.

* "Blog space" (in case you thought BS might stand for something else)

Wednesday, October 2, 2002


Journal Rankings and Faculty Evaluations

The idea of measuring and rewarding performance is simple and appealing. It is a cornerstone of principal-agent theory in economics and has been imported by some organization theorists under the banner of "New Pay" or "Strategic Pay." But one of the things we know is that naive or overly simplistic incentive schemes can be counterproductive. The problems have been formalized in the agency literature in the form of "multi-task agency" models, but the principles have been well known since at least the 1960s to scholars of the New Soviet Incentive Scheme.

In a nutshell, the problem is that the actors who are the objects of incentive schemes are sophisticated and can be expected to "game" the incentive structure, both by altering their behavior in undesirable ways and by manipulating the structure itself. (Deleterious effects of performance pay schemes have also been cited as factors in the recent spate of business scandals.)

The experience of attempting to create a formal journal ranking for the business school almost a decade ago provides a case study in the problems of designing effective incentives for inherently complex and subtle tasks. The job of creating a journal ranking in this earlier episode was assigned to the Research and Publications Committee, on which I was the Business Economics representative at the time. As a first step, members of the committee solicited ratings of journals in their fields from their respective groups, with each journal assigned a rating of A, B or C.

Some rather predictable (I would say) things followed.

The initial list of A journals for each area was (i) long and (ii) heavily populated by the journals in which faculty in the respective groups had previously published. Now, there are two possible explanations for this. One is that our faculty at the time were all very good and that quality was reflected in the quality of the journals they had published in. The other is that no one wanted to acknowledge publishing in second- or third-rate journals, which could be avoided by making sure that one's previous publication outlets were included on the A list. Undoubtedly, a little of both was at play.

The problem is – and note that this is the problem that motivates the creation of such lists in the first place – that it is hard for people outside a particular field to know what the quality of a particular journal really is: How were, say, non-accountants supposed to know whether a particular journal listed as an A accounting journal was truly an A journal or just one in which, for whatever reason, our accounting faculty happened to have a concentration of publications? Then there was the problem of cross-discipline comparisons: Are A journals in, say, Corporate Strategy just as prestigious as the A journals in Organizational Behavior? Who's to say?

To give you a sense of the problem, I dug out of my files a preliminary ranking from this earlier effort dated Feb. 15, 1994. Here are some of the numbers:

Group                               A journals listed   Ratio of A's to all listed
Accounting                                 10                    0.29
Business Econ.                             23                    0.34
Computer & Information Systems             27                    0.30
Corporate Strategy                         14                    0.52
Finance                                    16                    0.64
International Business                     17                    0.55
LHC:
   Communications                           9                    0.26
   Law                                      3*                   0.38
Marketing                                  19                    0.46
Organizational Behavior                     3*                   0.38
Operations Management                      10                    0.43
Statistics & Management Science            15                    0.38

*Clearly these folks just didn’t understand the game in this early round.

As you can see, the number of A journals listed by area varied from 3 (OB and Law) to 27 (CIS), and the proportion of all listed journals rated A varied from 26% (Communications) to 64% (Finance).
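(For those who want to poke at the figures themselves, here is the same data in machine-readable form -- a small sketch using only the numbers from the table above, which reproduces the spread I just described.)

```python
# A-journal counts and A-to-all ratios from the Feb. 15, 1994 draft list above.
a_lists = {
    "Accounting": (10, 0.29),
    "Business Econ.": (23, 0.34),
    "Computer & Information Systems": (27, 0.30),
    "Corporate Strategy": (14, 0.52),
    "Finance": (16, 0.64),
    "International Business": (17, 0.55),
    "Communications": (9, 0.26),
    "Law": (3, 0.38),
    "Marketing": (19, 0.46),
    "Organizational Behavior": (3, 0.38),
    "Operations Management": (10, 0.43),
    "Statistics & Management Science": (15, 0.38),
}

counts = [c for c, _ in a_lists.values()]
ratios = [r for _, r in a_lists.values()]
print(f"A journals listed ranges from {min(counts)} to {max(counts)}")      # 3 to 27
print(f"Share rated A ranges from {min(ratios):.0%} to {max(ratios):.0%}")  # 26% to 64%
```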

This, of course, was just a first round. When Business Law saw how many A journals CIS listed, many more A journals in Law began to emerge. And when we economists saw what Finance and Corporate Strategy were counting as A journals, you can bet we reassessed our own judgement of what an A economics publication was. And so on and so on.

To stem the ensuing rating hyperinflation, each group was thereafter instructed to limit its list of A journals to five. But this caused its own problems. The American Economic Association's Index of Economic Articles covers 300 journals, while the Social Science Citation Index (as of 1999) included in its "impact" ranking 160 journals under the heading Economics. The comparable SSCI figure for all business and finance journals, combined, was 84. Clearly, five journals represents a significantly larger fraction of potential outlets in some areas than in others.
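A quick back-of-the-envelope comparison, using only the journal counts just cited, makes the point:

```python
# What fraction of an area's potential outlets does a five-journal A list cover?
# Journal counts are those cited above.
outlets = {
    "Economics (AEA Index of Economic Articles)": 300,
    "Economics (SSCI impact ranking, 1999)": 160,
    "Business & finance (SSCI, combined)": 84,
}

for area, n in outlets.items():
    print(f"{area}: 5 of {n} journals = {5 / n:.1%}")

# Economics (AEA Index of Economic Articles): 5 of 300 journals = 1.7%
# Economics (SSCI impact ranking, 1999): 5 of 160 journals = 3.1%
# Business & finance (SSCI, combined): 5 of 84 journals = 6.0%
```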

The competition to get into those five journals also varies considerably across fields. The American Economic Association has approximately 22,000 members, "over 50% associated with academic institutions." The publication Who's Who in Economics (3rd ed.) estimated the number of "publishing economists" worldwide in 1999 at between 40,000 and 45,000. The American Finance Association, by comparison, claims 3,470 individuals as members, 2,941 of them employed by academic institutions. As of the end of 1998, the American Accounting Association listed 8,056 members, 6,913 of them "academic members." The website of the Academy of Management claims 12,517 members representing 82 countries, with 66% – or about 8,000 – academics. The result: Marketing Science, a marketing group A journal, had 115 submissions in 1994 and accepted approximately half of those for publication. The American Economic Review receives about 1,000 submissions per year and publishes approximately 100 articles, implying an acceptance rate of about 10%.
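The same arithmetic in sketch form; note that the Marketing Science acceptance count (58 of 115) is my rough reading of "approximately half," and the academic headcounts are the approximate association figures quoted above:

```python
# Acceptance rates implied by the figures above.
journals = {
    # journal: (submissions per year, acceptances per year)
    "Marketing Science (1994)": (115, 58),
    "American Economic Review": (1000, 100),
}
for name, (submitted, accepted) in journals.items():
    print(f"{name}: ~{accepted / submitted:.0%} acceptance rate")

# Approximate academic memberships cited above -- one gauge of the size of
# the pool competing for each field's five A-journal slots.
academics = {
    "AEA (economics)": 11000,   # "over 50%" of ~22,000 members
    "AFA (finance)": 2941,
    "AAA (accounting)": 6913,
    "AOM (management)": 8000,   # ~66% of 12,517 members
}
for assoc, n in sorted(academics.items(), key=lambda kv: -kv[1]):
    print(f"{assoc}: ~{n:,} academics")
```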

Clearly, success, as measured by publications in a "top-five" journal, means different things in different areas.

Many other specific problems and inconsistencies arose in the course of discussing the ratings back in 1994. How, for example, would the publication of a finance professor in an A-rated accounting journal (not on the finance list) be treated? In at least one case, a journal rated as an A journal by one area was rated as a C journal by another. Would publication in this journal form the basis for tenure and raises for a faculty member in the first group but grounds for termination and relative pay cuts for faculty from the other? And what to do about one area's "flagship" journal, the quality of publications in which was widely regarded – even within that area – as erratic at best? Last but not least were the distortions in ratings resulting from within-group politics: the theorists seeking to gain at the expense of empirical researchers by packing the A list with more theoretically oriented journals; the tensions between methodologists and applied researchers, and between psychology- and economics-based research, in marketing; the corporate governance scholars versus the investment analysts in finance.

Is there a way to get around these problems, some more objective way to rate journals that is not so subject to manipulation and game playing? Most of the candidates have problems of their own. Journal acceptance rates could, for instance, form the basis for cross-disciplinary comparisons of the selectivity of journals. But reliance on acceptance rates would penalize scholars for working outside the mainstream; it is often far easier to publish in a mainstream journal by tweaking another author's model than by striking out in a truly original direction.

Another candidate for rating journals more objectively would be to use something like the previously mentioned Social Science Citation Index's "Impact Factor," based on citation rates. The prospective influence of publications in the Academy of Management Review (Impact Factor: 2.781) could then be compared with the prospective influence of publications in the Journal of Political Economy (Impact Factor: 2.608), the Journal of Finance (Impact Factor: 2.137) or the Accounting Review (Impact Factor: 0.816).
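For reference, the Impact Factor is the standard two-year citation ratio; a minimal sketch of the calculation follows, with purely hypothetical numbers chosen to land near AMR's figure above:

```python
def impact_factor(citations_to_prior_two_years: int,
                  articles_prior_two_years: int) -> float:
    """Two-year impact factor: citations received in a given year to a
    journal's articles from the two preceding years, divided by the number
    of (citable) articles the journal published in those two years."""
    return citations_to_prior_two_years / articles_prior_two_years

# Hypothetical illustration: a journal whose last two volumes held 120
# articles that drew 334 citations this year scores about 2.78.
print(round(impact_factor(334, 120), 3))   # 2.783
```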

Of course, most articles published in "high impact" journals never get cited, while some articles published in relatively "low impact" journals break new ground and become classics. Which raises the question: Why not use citations to the article itself – rather than to the journal in which it is published – as a measure of an article's importance and of an individual's overall scholarly achievements? The inevitable response to this suggestion is that citations are an imperfect measure. And, indeed, they are. But, as I have just described, so are journal ratings. And unlike journal ratings, a substantial scholarly literature exists examining the relation between citations and various other measures of success and contribution in academia.

The real problem with citations as a performance measure is that the vast majority of academic articles never get cited. Faculty who have devoted their careers to producing large quantities of publishable but low-impact research do not now want the success criteria changed in a way that would reward influence over quantity.






     
 



_______________

VERITAS DITAT*





  * "truth enriches"

 

 

 

 

 

INSTRUCTIONS:

To read or add comments on an existing topic: Click on the corresponding Comments link. A separate window should open with both current comments and a space to add a new comment. Although fields are provided to enter your name, e-mail address and a URL, whether and how you identify yourself is up to you. Leaving all three spaces blank will create an anonymous comment (and increase its priority for deletion if the comment is deemed out of bounds; see the rules above). Alternatively, you may enter a nom de blog in place of your real name, which helps others reference your post -- and lets you establish a virtual reputation!

To suggest a new topic:  You can either 

1. e-mail me with your suggestion (at semasten@umich.edu) (PLEASE put UMBS-BS in the subject line of your e-mail); or
2. add a comment including your suggested topic to the SUGGEST NEW TOPICS section at the bottom of the page.
