For my part, I see no evidence whatever of any special bias against any new ideas just because they're new. Instead, a lack of support for an idea generally just means that the evidence for that idea is lousy. That some ideas do occasionally rise to the top simply reflects the changing evidence, while survivorship bias means we quickly forget the hordes of other (often downright crazy) theories that had to be rejected.
It's a similar story with scientific methodologies. Inasmuch as it exists at all, which I rather doubt, the "replication crisis" is at worst due to a poor understanding of statistics and a lack of rigour. This is entirely normal scientific practice: we find a problem and learn how to avoid it. That's exactly what we should be doing. If we took a more retrospective view, we'd quickly find ourselves hurling abuse at all previous experiments for not meeting today's improved understanding of how to achieve properly robust results. And yelling at Renaissance thinkers for not putting error bars on their plots just seems a bit, well, pointless to me.
The thing is, the process of refining cutting-edge practices always looks messy because cutting-edge practices are, by definition, the least understood. That mistakes are made is a completely normal, usually unavoidable part of the process. It's not worth panicking about because there's absolutely no way we can do anything about it, any more than a medieval peasant could have worried about the non-existence of cheap budget holidays or proper sanitation. Yes, mistakes are bad and should be corrected - and yes, sometimes the consequences can be extremely serious. But as the old meme goes:
Finding mistakes in the process is itself part of the scientific process. And guess what? That means we're actually going to - shock, horror - find out that we've been making mistakes.
Which is, somewhat ironically, why I want to emphasise how much of a non-issue confirmation bias really is. When your very method rests upon criticising itself, paradigm shifts are inevitable. We might get stuck in a rut for a while, but nowhere near as long as if we took our techniques for granted and never bothered to update them at all.
What does this look like at the coalface of research? Actually, sometimes it can be a pretty unpleasant experience. When you've spent months checking that a result is valid and exploring all its possible implications, and written it up in a way that seems perfectly clear to you and your co-authors, only to have a referee decide (sometimes arbitrarily) that it's wrong, the self-correcting nature of the scientific process doesn't feel particularly nice. We often say that scientists like being wrong, and that's true - provisionally. There are definitely circumstances under which it's not much fun at all. Maybe at some point I'll try to generalise when being wrong feels rewarding and when it just plain sucks.
The tricky part is that with messy, front-line research, you never really know where it's going. So it can be hard to tell whether the referee is just doing their job (they absolutely should attack things from multiple angles) or being over-zealous. Case in point: my current paper is now on its second referee, after we decided that the first was simply too inconsistent to reason with. I've previously said that this is partly due to the journal's lack of instructions as to what it is referees actually do, but the second referee is also rather negative about the main results.
I'll describe the paper in more detail when it's finally published. The main thrust of it is that we found some short gas tails from galaxies in a region where we expect galaxies to be losing gas. This is hardly an Earth-shattering result. Detecting gas streams is something interesting enough to publish, but not something that would warrant a mass orgy. Finding a very small portion of the gas galaxies have lost isn't going to revolutionise physics. And yet to persuade two independent referees that these claimed detections are likely real is proving to be much more work than merely saying, "look, here's a picture", even though I personally think said picture is pretty damn compelling. Nor is this the first time I (or literally all of my colleagues) have had to try and convince someone of something which seemed to be bleedin' obvious.
For my part, I'm 100% convinced our results are correct. Some parts of the paper will change, but if the referee is a reasonable person then they'll become much more enthusiastic about our main findings on the next revision. We'll see.
What does this mean for the really big issues in contemporary physics - dark matter, dark energy, the standard model, that sort of thing? It doesn't mean these paradigms are definitely correct, far from it. As I said, the self-criticising nature of science means these paradigms may well be overturned. What it does mean is that these paradigms are not, absolutely 100% certainly NO FUCKING WAY, enforced by confirmation bias. That is simply not a thing. If minor incremental advances are routinely subjected to such robust criticism, then the idea that the major findings have somehow evaded this onslaught is clearly utter bollocks. Trying to convince people of the bloomin' obvious may be (extremely!) annoying to deal with in day-to-day research, but the plus side is that we can be really, really, really, really sure the major findings aren't just down to everyone liking to toe the line and hating novelty, or feathering their own nest with all that lovely grant money, or whatever other nonsense someone's on about.
Dark matter is just groupthink? Total bollocks.
The standard model is full of holes and no-one wants to admit it? Utter tripe.
Cosmology won't consider any alternatives to the Big Bang because of an obsession with publishing more and more papers? Absolute rot, the lot of it.
That is not to say that individuals, and even individual institutions, can't be subject to all the usual human frailties that afflict us elsewhere: they can. But the idea that the whole global system enforces paradigms because it doesn't want to change or isn't seeing the problems with the models - that is pure fiction.
The reason paradigms change isn't because people suddenly decide they'd just got everything wrong. Real revolutions don't involve everyone apologising to people they'd once dismissed as cranks, because cranks overwhelmingly tend to be just, well, cranks. No, real change happens because of new evidence and discoveries. These could be new observational or experimental data, or the development of new statistical or mathematical techniques. They can take a while to become widely accepted and adopted, but that too is an unavoidable, desirable part of the process. You wouldn't really want to start off by flying passengers in prototype aircraft, after all.
So while it's absolutely normal to use techniques we expect are imperfect and will be improved, this is fundamentally different from the idea that there is widespread malpractice, i.e. knowingly using misleading methods in order to support a given result. I, for one, think that charge has no force to it at all. And while we should always seek to criticise current practice, what we should not do - unless there are quite exceptional circumstances - is pretend that this criticism is anything other than part and parcel of the normal scientific approach. Yes, it's extremely rewarding when this leads to major breakthroughs. The penalty we have to pay is that at the ground level it can often be tedious, boring, and genuinely annoying.
False Consensus
Of all the allegations made against mainstream science, the charge of "false consensus" is the one that's the most worrying. The idea is that we all want to agree with each other for fear of being seen as different, or worse, that we will lose research funding for being too unconventional.