This is a very nice, readable article challenging the now-popular claim that Einstein never called the cosmological constant his biggest blunder. In brief, the traditional story (which still dominates today) goes that Einstein introduced the constant as an otherwise-unjustified term in his equations: he preferred a static universe, which the equations didn't allow without it. Later, on Hubble's discovery (simplifying somewhat) that the Universe was actually expanding, he changed his mind and regretted missing out on an amazing prediction that observations would have confirmed, famously calling it his "biggest blunder".
The now rather popular skeptical position is that he never actually used that term, and may not even have regarded it as such an important failure (https://www.theatlantic.com/technology/archive/2013/08/einstein-likely-never-said-one-of-his-most-oft-quoted-phrases/278508/). The quote is incredibly widely reported, so if nothing else it'd be nice to know whether he really used it or not. The skeptical argument rests on there being only one source for the original quote: George Gamow, who wrote it down with a condescending sneer that Einstein was old and befuddled.
This article challenges this quite strongly, finding two other, independent sources of the quote. That alone makes it far more plausible. Less important for the exact history, but much more interesting for the context, are their examinations of Gamow's character and the intent of his remark that "of course the old man agrees with anything nowaday". Gamow, they say, was underrated as a physicist precisely because of his mischievous humour, and they suggest this remark could be interpreted as Gamow actually being self-deprecating - essentially saying that Einstein's agreement with Gamow's theory wasn't the great honour one would normally assume it to be. A sort of backhanded insult, but more jovial and without the viciousness the quote seems to carry when read out of context.
The most interesting part for me was a slightly tangential look at Gamow as a science populariser and how this wasn't seen as the desirable activity it's generally regarded as today. Paying tribute to Gamow, Wolfgang Yourgrau remarked:
Gamow committed an unforgivable sin. He wrote popular books on physics, biology and cosmology. Moreover, the books were bestsellers because they enabled the uninitiated not only to understand scientific discoveries and theories, but also to understand the human, often humorous facets of the researching men engaged in all of these mysterious ventures…. Most scientists do not fancy the oversimplifying, popularizing of our science…it is tantamount to a cheapening of the sacred rituals of our profession… many of us considered him washed up, a has-been, an intemperate member of our holy order.
http://adsabs.harvard.edu/abs/2018arXiv180406768O
Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.
Tuesday, 24 April 2018
How could a galaxy lose all its dark matter ?
This paper attempts to simulate the formation of that galaxy apparently lacking in dark matter. It's somewhat taken for granted that while you can fake the appearance of additional dark matter through gravitational encounters (streaming motions along the line of sight are hard to distinguish from rotation, making it look as though a galaxy is rotating very quickly when actually it's just being disrupted), the opposite case, where a galaxy is made to appear deficient in dark matter but actually still has a lot of it, is probably not possible. So the authors try to get a tidal encounter to genuinely remove as much dark matter from their target galaxy as possible.
As we all know, if you do this suddenly and completely, a galaxy will explode. But that requires magic. If you remove it gradually and leave a bit in the centre, there's no reason a galaxy can't survive quite happily. Oh, it'll be more vulnerable to more encounters in the future, but if it's left well alone it'll be fine.
Under the implicit assumption that this particularly weird object legitimately requires a weird progenitor, they therefore start with a target galaxy of unusually, but not exceptionally, low concentration. This means its dark matter halo is unusually extended and therefore easier to remove as it orbits around a more massive galactic thief. No doubt someone will calculate exactly how improbable this is and argue that the chances of detecting a galaxy that has a 1% chance of existing are a million to one, such are the dodgy statistics that seem to be in vogue in astronomy.
Anyway, their target galaxy has stars and dark matter, whereas the burglar galaxy (my term) is a purely analytic, fixed potential. This is a reasonable way to begin, although eventually I'd hope they add in a particle model for the second galaxy as well as including gas and star formation. Gas, in particular, could change things dramatically because it's collisional, but it's reasonable to speculate the target galaxy might have already depleted its gas supply. Using particles for the burglar's halo could also be important, since dynamical friction can increase the chance of a merger.
The authors find that indeed large amounts of dark matter can be stripped by this simple tidal process of close encounters, and the galaxy still survives. This is an important point, but what the paper does not yet adequately demonstrate (it's only submitted, not accepted) is how well the results compare to observations. NGC1052-DF2 is interesting because (criticism notwithstanding) it seems to have little or no dark matter not just in its outer halo, but everywhere. The authors say they reproduce the object's halo mass, but don't give a figure to demonstrate this, or quantitatively compare the velocity dispersion of the stars in the simulations with the observations. Without this, the result that the most loosely bound dark matter can be removed is neither surprising nor novel. It's a decent beginning, but the main claim hasn't been made very convincingly yet.
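The comparison I have in mind is straightforward to make: from the simulation particles you can measure the line-of-sight velocity dispersion directly and set it against the observed value. A minimal sketch (my own, not from the paper - the function name, units and mass weighting are my assumptions) might look like:

```python
import numpy as np

def los_velocity_dispersion(velocities, masses, los=np.array([0.0, 0.0, 1.0])):
    """Mass-weighted line-of-sight velocity dispersion of a particle set.

    velocities : (N, 3) array of particle velocities (e.g. km/s)
    masses     : (N,) array of particle masses
    los        : unit vector along the chosen line of sight
    """
    v_los = velocities @ los                          # project velocities onto the line of sight
    mean = np.average(v_los, weights=masses)          # mass-weighted mean motion
    var = np.average((v_los - mean) ** 2, weights=masses)
    return np.sqrt(var)
```

Applied to the star particles of the simulated remnant (ideally for several viewing angles, since tidal debris is anything but isotropic), this would give a number directly comparable to the observed stellar dispersion of NGC1052-DF2.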
https://arxiv.org/abs/1804.06421
Friday, 20 April 2018
Paper re-submitted with a slightly modified title...
Apparently the first one wasn't informative enough. So now we've gone with "Optically dark HI clouds in the Virgo cluster: will no-one rid me of this turbulent sphere?"
We've kept the acknowledgement to Henry II though. I shall be thoroughly irked if they ask us to take that out.
Thursday, 12 April 2018
The life of a scientist explained in a simple flow chart
Actually, British universities exist to facilitate the drinking of tea. Any research that gets done is considered a bonus, or in extreme cases, detrimental. 9:30am - start off with a cuppa. That'll last you through to official tea time at 10:30, which can easily be extended until lunchtime, whereupon you'll need another cup of tea to prepare you for the afternoon. Fortunately that could take so long that you end up in official afternoon tea, which will probably last until it's time to go home, full of delicious tea and with none of that pesky "research" having slowed you down.
Sunday, 8 April 2018
A grant lottery
Seems like a clear case of "let's try it and see" to me.
Implicit in this proposal is the idea that it isn’t possible to rank applications reliably. If a lottery approach meant we ended up funding weak research and denying funds to excellent projects, this would clearly be a bad thing. But research ranking by committee and/or peer review is notoriously unreliable, and it is hard to compare proposals that span a range of disciplines. Many people feel that funding is already a lottery, albeit an unintentional one, because the same grant that succeeds in one round may be rejected in the next. Interviews are problematic because they mean that a major decision – fund or not – is decided on the basis of a short sample of a candidate’s behaviour, and that people with great proposals but poor social skills may be turned down in favour of glib individuals who can sell themselves more effectively.
My view is that there are advantages for the lottery approach over and above the resource issues. First, Avin’s analysis concludes that reliance on peer review leads to a bias against risk-taking, which can mean that novelty and creativity are discouraged. Second, once a proposal was in the pool, there would be no scope for bias against researchers in terms of gender or race – something that can be a particular concern when interviews are used to assess. Third, the impact on the science community is also worth considering. Far less grief would be engendered by a grant rejection if you knew it was because you were unlucky, rather than because you were judged to be wanting. Furthermore, as noted by Marina Papoutsi, some institutions evaluate their staff in terms of how much grant income they bring in – a process that ignores the strong element of chance that already affects funding decisions. A lottery approach, where the randomness is explicit, would put paid to such practices.
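The mechanics of the scheme are simple enough to sketch: screen proposals for basic competence, then draw at random from the pool until the money runs out. A toy version (entirely my own invention - the field names, the pass/fail screening and the budget logic are all illustrative assumptions, not anyone's actual proposal) could look like this:

```python
import random

def grant_lottery(proposals, budget, seed=None):
    """Toy grant lottery.

    proposals : list of dicts with 'title', 'cost' and 'passes_screening' keys
    budget    : total funds available
    seed      : optional seed, so the draw is auditable and reproducible
    """
    rng = random.Random(seed)
    # No ranking beyond a basic pass/fail competence check.
    pool = [p for p in proposals if p["passes_screening"]]
    rng.shuffle(pool)  # the randomness is explicit, not hidden in reviewer noise
    funded, remaining = [], budget
    for p in pool:
        if p["cost"] <= remaining:
            funded.append(p)
            remaining -= p["cost"]
    return funded
```

The point of making the seed explicit is the one the post emphasises: when the randomness is out in the open, a rejection carries no judgement about the applicant, and nobody can pretend grant income measures merit.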
http://deevybee.blogspot.com/2018/04/should-research-funding-be-allocated-at.html