Hoover, a physicist at Lawrence Livermore National Lab, had tried unsuccessfully to get a paper published in two leading journals. So he added a co-author from a prestigious-sounding institute, the Institute for Advanced Studies at Palermo, Sicily, and resubmitted the work. Sure enough, the paper was accepted and published. He did this several times with the same result. But the name Hoover chose—Stronzo Bestiale—was a sly tell: In Italian, it means “giant asshole.” And yet Bestiale remains in the scientific literature, just like Hoss Cartwright. So does Galadriel Mirkwood, an Afghan hound that belonged to biologist Polly Matzinger of the National Institutes of Health. She was fed up with the use of passive voice in scientific papers, and decided to add her pup’s name to a paper in protest.
In astronomy the use of the passive voice is severely frowned upon... errr, I mean, we hates it, precious! We hates it!
It’s tempting to laugh off some of these antics, which seem driven by ego and self-interest. But they also underscore a painful truth: Unless the evaluation of scientists—and the all-important doling out of funding—can be wrenched away from bean-counting metrics, history is likely to repeat itself. Tomorrow’s metrics gamers may come up with some other ruse, and spoofers like Morgenstern will invent the next Hoss Cartwright in response. Reading and evaluating a selection of a job applicant’s papers takes far more time than plugging a bunch of numbers into a matrix. But it’s precisely that output, not metrics, that science is supposed to be about. The agencies that fund grants and the committees that hire and promote academic researchers need to get back to doing the hard job of assessing the value and quality of candidates’ scientific work rather than leaning on the crutch of overly simplified publication metrics.
It's the over-reliance on simplified metrics that's the problem here. A publication record is a useful thing, but relying on sheer numbers is a terrible idea. Hence my previous suggestion of a more nuanced journal/publication ranking system, where one could see how many papers of particular types and review quality a researcher has. Even then, relying entirely on numbers would be a fatal mistake, because you can't quantify research quality. It's fundamentally impossible. All you can do is try to make the current system better.
http://nautil.us/issue/42/fakes/why-fake-data-when-you-can-fake-a-scientist
Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.
An April 1st upload to arXiv.org of a paper by Dr Pisi-Pantz or Buster Gonad is funny, but most fake science is anything but.
Adam Synergy: Are those not real people?
Hi Aaron Gilliland, it seems that I could easily steal some unfortunate scientist's published hard work (if I were so inclined), rewrite it, and submit my paper in the name of Dr Pisi-Pantz from the School of Heuristic Intelligence & Technology (SHIT) to one of the myriad lesser-known journals. Then I could sit back and wait for offers of a seat on editorial boards to flood in, before embarking on a globetrotting tour of science conferences and meetings.
Hmm... perhaps a new career as a phoney scientist is worthy of consideration.
Rhys, you bas. I almost choked to death on my coffee - Stronzo Bestiale. There's so much Rongness there.