Why are scientific papers so damn boring ? Many reasons. The article focuses heavily on acronyms, which are a problem, but in my opinion this is just the thin end of a very large wedge.

A group of researchers in Australia, including marine ecologist Zoe Doubleday and statistician Adrian Barnett, looked at almost 25 million scientific articles published between 1950 and 2019, searching for trends in acronym use - turning up more than a million distinct acronyms along the way. They published their analysis in eLife.
1 million acronyms is a staggering number, but what surprised Doubleday even more was the fact that only 0.2 percent of those abbreviations were used regularly (meaning that they appeared at least 10,000 times) and 79 percent were used fewer than 10 times. “Not only are we creating more acronyms over time,” she says, “but we’re not even reusing them.”
Paradoxically, while scientists are not reusing new acronyms, they’re creating new definitions for acronyms that already exist. In an article about clarity in scientific writing, published in the Journal of the American Society of Echocardiography, Alan Pearlman says: “A psychiatrist knows that MS stands for ‘mental status,’ while a neurologist might take it to mean ‘multiple sclerosis.’ I am interested in valvular heart disease and am certain that MS stands for ‘mitral stenosis.’ My cardiac pharmacologist reminds me, however, that MS really stands for ‘morphine sulfate,’ while my neighbor, who works in the computer industry, tells me that it stands for ‘Microsoft.’”
Figuring out when an acronym is appropriate can be tricky, says Barnett. Generally, he says, most scientists now think an acronym should be used only when the term is unambiguous, common in the field, not easily replaced by simpler language, and when the words are too long or complex to be consistently written out. Think of "HR" for "heart rate," says Barnett - they're both two syllables, it's easy to spell out, and it's a very simple concept that most people are familiar with.
Good advice. I've semi-affectionately given ALMA the nickname of the Amazingly Large Manufacturer of Acronyms, since it generates them at a truly frightening rate. It also has the habit of using this bizarre term "block" for everything : scheduling block, observing block, execution block... it's weird. If you were to say, "my observing campaign suffered the failure of an execution block because the SB had the wrong intent, so the DRM told me that the P2G would be looking into fixing a bug with the SPW setup so that the AOT will spot problems in FDM before running into this during QA2", no-one would think anything amiss.
Yes, some jargon is actually beneficial, and a good deal of it is unavoidable, but much of it is just plain useless.
Some of it, of course, is easily addressed. I remember being tremendously perplexed by the term "path length" when learning about wave interference back in high school. Somehow my brain got stuck and for the longest time it just didn't occur to me to take the term literally : the length of the path travelled. I was expecting jargon where in fact there wasn't any ! Some stuff you've just gotta know, and you can't expect a scientific paper to be fully accessible to a general audience. That's fair enough.
... but you can expect it to be accessible to a specialist audience. Acronyms are only a small part of why this isn't always the case. To be honest, these days I find reading most extragalactic astronomy papers relatively easy-going, but even then only within a very narrow sub-discipline, and it's taken me a long time to get to this point. When I first started (and still today, if I venture too far outside my comfort zone) I found few things as tedious as slogging through what are often some of the driest, most lifeless pieces of text that some poor sod has ever had to endure writing. It's not really the acronyms so much as the style - or more often the lack thereof - and the format of the whole thing.
What's the real problem ? There's no one single cause, but the biggest contributing factor, in my opinion, is a lack of clear narrative flow. Any good piece of text ought to flow linearly, one section leading naturally to the next. In a paper this is sometimes not the case at all, with each section being almost entirely independent. What's particularly maddening is that there isn't a universal approach to this. Some papers should be read from start to finish, with each section being essential for the next, whereas others can and should be read according to whichever section the reader is most interested in (otherwise they'll just bore themselves unnecessarily). But even the most modular papers will often reference parts of other sections, so the reader has to move through the text like the world's worst Choose Your Own Adventure book. To say nothing of the notorious paper chase, when you just want a single parameter but end up following a whole chain of citations through a dozen different papers, only for it to end at an unpublished proceedings or, worse, cite back to the original paper in a closed loop...
Now of course, it just isn't sensible to insist on a linear narrative structure all the time. Sometimes that just wouldn't be appropriate. The problem is that while some papers do have entirely self-contained sub-sections, others don't. This means the only way to be certain you're not missing some vital caveat is to read the whole damn thing. And I know from direct experience that referees often prefer papers to avoid repetition (i.e. implicitly insisting on a linear structure even when this isn't a good idea, never mind that repetition is a very useful tool for increasing understanding), making it a risky business indeed to take any statement in a paper out of context.
The point is that there's no agreed-upon format. This makes reading a paper far more of a challenge than it has any real need to be. And that goes a long way to explaining why papers are so incredibly dull - even the basic structural format is ill-defined. It's like trying to build a house without having decided if you want to end up with a three-bedroom semi or an igloo.
The second major problem I'd highlight is the supposed need to prioritise clarity to an absurd and counter-productive extreme. Any kind of flair or rhetoric tends to be exorcised as though it were an evil spirit haunting the text with its terrible screams of Trying To Make Things Lively For The Reader. Again the inhomogeneity of standards here is extremely frustrating : a few authors seem to get away with colourful remarks and even jokes (!) but most are absolutely reduced to the barest facts. And reducing things to the extent that they're unreadably dull would seem to defeat the whole purpose of publishing something that's intended to be read !
Similarly, when I've tried to introduce some pedagogical remarks for the less uber-specialist reader, on occasion I've been told to remove them because what they describe is supposedly well-known. It's very hard to gauge what is indeed generally well-known and what's just well-known to the reviewer - a problem inherent in using highly expert referees. The deeper you are in a field, the harder it is to judge what's confusing to someone less entrenched. And again, there's no agreed-upon standard as to who the primary target audience should be, or what level of knowledge they're expected to have.
There isn't really any one obvious solution to this. Some options are technological, even mundane. A decade or two ago it was standard practice to write all mathematical symbols in this downright cryptic swirly italic font, such that the letter "M" looked like the result of a Victorian robot getting into a fight with an angry snake. Thankfully this has largely died out, although it's still common to use really weird, unpronounceable Greek* letters as variables when perfectly decent English letters would do just fine** (see the contrived example below the footnotes). Using a bunch of unreadable squiggles presumably only serves to harken back to an earlier era of arcane wizardry and occult mysticism, because it certainly doesn't help anyone follow what the equations are supposed to actually mean.
* Of course, if you happen to be Greek, you probably take a different view. That English is the language of academia is historical happenstance, but there's no reason learning Greek letters should be a key part of scientific practice.
** There are some standardised exceptions, of course. I'm referring here to customised variables not likely to be used outside a single paper, which have no need to be obfuscated so.
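To make this concrete, here's a contrived before-and-after in LaTeX - the relation and all the symbol names are invented purely for illustration, not taken from any real paper :

  \documentclass{article}
  \begin{document}
  % The same made-up relation, written two ways.
  % The squiggle edition, impossible to read aloud :
  \[ \mathcal{M}_{\varsigma} = \xi \, \mathcal{L}_{\wp}^{\,\gamma} \]
  % The plain-letters edition, pronounceable and guessable at a glance :
  \[ M_{\mathrm{gas}} = k \, L_{\mathrm{line}}^{\,\gamma} \]
  \end{document}

Both produce perfectly valid typeset equations, but only one of them can be spoken out loud in a talk without summoning something.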
One possibility might be to alter the physical format of papers to better reflect the fact that nobody reads the original hardcopy any more : making it standard practice to use internal links to relevant sections, along with some way to quickly return to the point from which the reader jumped. Embedded dictionary entries, like those found on e-book readers, would also be a great benefit, allowing the novice reader to very quickly see a one-paragraph summary of a common term with citations to the major relevant papers. And some facility to tell the reader whether a paper was intended to be read linearly or not would go a long way to making the whole process easier. There is value in insisting the reader experience the complete paper, and there is also value in sacrificing efficiency to ensure a more complete understanding, but there are plenty of cases where this is downright cruelty.
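As a proof of concept, the jump-and-return part is already possible with nothing more exotic than the standard hyperref package. A minimal sketch, with the section names and link labels invented for illustration :

  \documentclass{article}
  \usepackage{hyperref}
  \begin{document}
  \section{Results}
  % Mark the point the reader jumps from, so the destination can link back :
  As discussed in \hyperlink{sec:methods}{the methods
  section}\hypertarget{back:results}{}, the sample is incomplete.

  \section{Methods}\hypertarget{sec:methods}{}
  Lengthy details here\ldots\ (\hyperlink{back:results}{return to where you
  left off}).
  \end{document}

That "return to where you left off" link is the bit almost no paper bothers with. The embedded dictionary entries would need support from the journals themselves, but this much could be adopted tomorrow.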
On a purely practical note, we should do away with LaTeX. It's just not a natural way of writing a document and forces authors - especially beginners - to spend more time figuring out how to produce an article than they do on writing comprehensible text. There's no energy left over for anything else.
Thinking further ahead, some AI-based way to use the internal links so that the paper could be customised for the individual reader would be even better. Say, if a paper catalogues a hundred different properties of a sample but you're only interested in two or three, then being able to reduce all the description to only those parameters would make everyone's life a lot more pleasant.
Likewise, having dedicated, specialist journals of scientific methods might help. If an author could simply describe their sample and then say, "we analysed this using the procedure of Horace et al. 1995", where Horace et al. describe nothing but the methodology, that would reduce the paper size and leave the details accessible to the enthusiast. Such a methodology journal would have to be especially rigorous and detailed (almost to the point of describing which buttons to press), ensuring that the reader could reproduce the method exactly, but this would have the benefit of preventing the author from omitting crucial details. Certain criteria - integration time, number of particles, and so on - could easily be standardised, and a methodology paper could then state which of them were essential. Hence even a novice reviewer could see at a glance whether the authors had included them, since they'd have an easy checklist to follow.
Similarly, where the authors are claiming to have produced some new procedure or recipe, or determined some parameter value, it should be absolutely essential that they state this with the utmost clarity, not bury it deep within the text. It's something of a paradox that insisting on clarity has tended to lead to so much obfuscation; linear narratives, on the other hand, tend to lead more naturally to descriptions of what the reader needs to know. The challenge for the reviewer in that case is to avoid having the wool pulled over their eyes.
Which leads finally to just how damn boring most papers are. Frankly they have all the joie de vivre of a whale carcass. I get that it can be a nerve-wracking experience trying to deliver a lively oral presentation, and similarly writing is a skill like any other, but papers - for which the author has ample preparation time in a controlled environment - do not have to be this bad. This is the hardest issue to address, as it's hard to know who to blame. Do authors write dull papers because they can't find a way to say anything interesting* that will get past the reviewers, or simply out of the expectation that that's how papers should be written ? Do reviewers want more lively text ? Do they tend to be such experts that they no longer care about writing style ? It's hard to say.
* Or rather, in an interesting way. The content is another topic entirely - here I'm only concerned with style and readability, not scientific merit.
I'm not advocating that papers be full of random jokes about vampire hamsters. I just think the general level of formality is so absurdly extreme that clarity actually suffers, because no-one wants to read anything.
One possibility would be to always have two reviewers : one expert (e.g. someone with a decade or more of experience in the field), who would comment on nothing but the scientific content, and one novice (e.g. a PhD student), who would comment on nothing but the style. The expert would then not have to worry about petty typesetting and minor spelling mistakes, but they'd also be forbidden from saying much about the clarity and/or brevity of the explanations. That would be the job of the novice, who, conversely, would not get to say much about the content.
In principle the downside of this is that it increases the workload. But it would have positive side-benefits : it would get beginners in the field involved in the whole process right from the start, and help them develop the ability to say "I don't understand this" (which many people, myself included, don't like to do, just in case someone has the temerity to try and explain things...). And in any case, the work involved in reviewing style is a lot less than that of reviewing content.
An alternative might be to have a structured review form that clearly sets out what the reviewer should do, asking them to consider whether the terminology really would be clear to a beginner. Whilst a few papers may need to be written with the expectation that only other tenured professors are capable of understanding them, this should be very much the exception rather than the norm. An objective checklist for understandability wouldn't be as effective as using someone who actually lacks the knowledge, but it's better than letting an ultra-specialist run wild and free, deciding for themselves what's comprehensible. Getting people to stop and think, "Is this jargon ? Could this be misinterpreted ? Could I use something simpler without sacrificing rigour ?" would at least help.
Rant over. Science is hard, but that's no reason a simple sentence has to be constructed with such malevolent exactitude as to cause unwitting novice observers, bent on seeking a greater understanding of the particular contents of the implicitly aforementioned scholarly work through a dedicated period of detailed scrutiny and examination, a series of progressively increasing difficulties in their apprehension of the implicitly aforementioned scholarly work that can, in the most serious instances, lead to headaches and other common mental conditions that severely restrict their intended goal of comprehending the piece before them, ultimately leading to disillusionment and dissatisfaction with the entire academic edifice.
See what I mean ?