Research productivity is most often measured by people who cannot distinguish good papers from bad ones. Such measurements therefore tend to devolve into mechanical algorithms that count the number of publications and the impact factor of the journals where the research appeared, rather than sensible arguments about the merits (or demerits) of the researcher's work. Evaluating a researcher thus becomes a "numbers game", where a researcher with a larger number of small papers easily outranks another with a smaller number of longer, more complex publications. The race to the "smallest publishable piece of research" increases the number of papers (arguably "good" for the researcher who needs a "good" evaluation) but makes following the literature more difficult, as one has to keep track of an ever-increasing number of papers of dwindling individual importance. It also detracts from the value of the research being reported: in my example today, two papers report computations on very similar compounds, the only difference being the interchange of a nitrogen atom with a phosphorus atom.
A single paper would have been much more useful and important, but research managers would count that as less productive :-(
PS: I happen to disagree strongly with the suggestion, in these papers, that intramolecular H-bonding exists, as the angles involved are too small for H-bonds.