Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.
Friday, 30 June 2017
Conference concert
Conference concert in the Rudolfinum. And guess who got to sit dead centre in the third row from the front ? Me, that's who. I win, bitches.
And with that I withdraw once more for one final day of non-stop astronomy...
[I found out later that this was entirely down to luck, as the seats were given out completely at random]
Sunday, 25 June 2017
EWASS begins
EWASS, the European Week of Astronomy and Space Science, has now commenced. For the next week I'll be fighting off the invading horde of 1100 barbarous astronomers and the week after that will be spent recovering. I'll be online, but intermittently.
http://eas.unige.ch/EWASS2017/about.jsp
Saturday, 17 June 2017
In Theory
Alternative title : Ten Times Scientists Didn't Use The Word Theory To Mean A Well-Tested Model That's Almost A Fact Because That's Not What The Damn Thing Means So Just Get Over It Already.
Admittedly, I do keep flip-flopping on whether "theory" means "incredibly well-tested model" or something else. This post should definitively clear that up by making it abundantly clear that everything is much more complicated than that.
Clearly there are some theories which do extraordinarily well - sometimes so well that theory and fact are indistinguishable. It might be fair to start to describe these as laws, not theories - the law of gravity, the law of evolution. Both of these things are established factual processes. Yet even these are like Russian dolls : within them we find detailed theoretical models of how they occur, and within those we find competing hypotheses as to how particular aspects proceed and even rivals to the theory - but not the laws. Gravity is a thing. Evolution happens. It's the mechanisms by which these things occur that are open to debate (at least a little bit), not their very existence.
Even if we were to insist that hypothesis only means "explanation with little or no testing" (which it does) and theory only means "well-tested explanation" (which it doesn't), it still wouldn't be easy to distinguish between the two. No strict criterion of what "well tested" means exists. It's probably impossible to define one anyway, given the incredibly diverse nature of theories. You can't equate cat emotions with the distortion of spacetime around a black hole, or at least you shouldn't.
The reality is, though, that the vast majority of theories fall somewhere between these two extremes. They aren't just speculations based on limited data, and they aren't so convincing that no other explanations are plausible. They've had some testing and they generally work, but they have room for improvement. Some of them might turn out to be completely wrong, others just need tweaking.
I'm all for rigorous definitions wherever that's possible and appropriate. But in the case of "theory" I think that neither is the case. The simple truth of the matter is that science isn't always purely objective. It's a murky, messy business of turning facts into models, testing those models, rejecting some while provisionally tolerating others. Pretending that it's more objective than it actually is won't work, because it simply isn't true. Would it be nice if it was ? Sure ! But that's not what it's like, and that murkiness is sometimes what makes it fun.
No definition will stop the most ardent from bullshitting about science, because these people simply do not care - and you can't argue with someone who doesn't care, you can only have shouting matches. But for the rest, let's not set ourselves up for disaster by pretending we know things we do not. Simply admit the plain truth of it - that we know hardly anything for certain, but we're far, far more confident about some things than others. If this leaves people feeling lost and insecure, then that would be a good start. Perhaps (and I say this cautiously, knowing how damaging bullshit and stupidity can be) then they'd stop the chest-thumping for a moment, begin to realise that not everything can be quantified, and actually learn how to think.
https://astrorhysy.blogspot.com/2017/06/in-theory.html
Monday, 12 June 2017
Crowdsourced reviewing
Interesting and novel approach. Via Sakari Maaranen.
I am not proposing what is sometimes referred to as crowdsourced reviewing, in which anyone can comment on an openly posted manuscript. I believe that anonymous feedback is more candid, and that confidential submissions give authors space to decide how to revise and publish their work. I envisioned instead a protected platform whereby many expert reviewers could read and comment on submissions, as well as on fellow reviewers’ comments. This, I reasoned, would lead to faster, more-informed editorial decisions.
I recruited just over 100 highly qualified referees, mostly suggested by our editorial board. We worked with an IT start-up company to create a closed online forum and sought authors’ permission to have their submissions assessed in this way. Conventional peer reviewers evaluated the same manuscripts in parallel. After an editorial decision was made, authors received reports both from the crowd discussion and from the conventional reviewers.
This January, we put up two manuscripts simultaneously and gave the crowd 72 hours to respond. Each paper received dozens of comments that our editors considered informative. Taken together, responses from the crowd showed at least as much attention to fine details, including supporting information outside the main article, as did those from conventional reviewers.
So far, we have tried crowd reviewing with ten manuscripts. In all cases, the response was more than enough to enable a fair and rapid editorial decision. Compared with our control experiments, we found that the crowd was much faster (days versus months), and collectively provided more-comprehensive feedback.
https://www.nature.com/news/crowd-based-peer-review-can-be-good-and-fast-1.22072