Ten Dark Galaxies That Are Way More Awesome Than Dragonfly 44
... no, not really. Well, maybe not. Thing is, it's complicated.
Dragonfly 44 is a very faint, dark matter dominated galaxy that's currently getting a lot of media attention. And rightly so, because it is very strange. At least, it might be. We don't yet know for sure just how strange it is : if it's only as massive as has been directly measured, it's not that weird. But if it's as massive as the extrapolated estimates suggest, then it's really very strange indeed.
As I show in this post, we already knew of a lot of optically very faint objects that are very hard to explain. We don't know how gas gets into dark matter halos in the first place, and we definitely don't know how it turns into stars. So Dragonfly 44 doesn't break those aspects of galaxy formation theory, because they were already in pretty bad shape anyway.
But what cosmological models did predict was that there should be many more very small galaxies than we'd previously detected. Recent discoveries looked like they were beginning to find them... not in sufficient numbers, but enough to hope that maybe the theory wasn't so bad after all. Dragonfly 44 might throw a spanner in the works if its mass is as high as the estimates suggest : there was never much of a problem with the theory for galaxies of that mass, where it seemed to be in decent agreement with the observations.
So Dragonfly 44, together with a bunch of other observations, might indicate that there are a whole load of giant dark galaxies that were never predicted by the theory. That could be a big problem. But I'd reserve judgement for the moment, for two reasons. First, we don't have many good mass estimates for these new galaxies - and those we do have suggest the majority are dwarfs after all. Second, the high mass estimate for Dragonfly 44 is a huge extrapolation based on the known-to-be-flawed numerical simulations.
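To give a sense of the gulf between the two numbers, here's a back-of-the-envelope version of the direct measurement - a minimal Python sketch using a standard estimator of the form M ~ 4 sigma^2 R_e / G. The velocity dispersion, half-light radius and halo mass below are representative values from the coverage, not the paper's exact figures.

G = 4.301e-6   # gravitational constant in kpc (km/s)^2 / Msun
sigma = 47.0   # stellar velocity dispersion in km/s (approximate)
r_half = 4.6   # half-light radius in kpc (approximate)

# Dynamical mass enclosed within the half-light radius :
m_direct = 4.0 * sigma**2 * r_half / G   # ~1e10 Msun

# The extrapolated *total* halo mass quoted in the coverage :
m_halo = 1e12   # Msun

print("Measured within r_half : %.1e Msun" % m_direct)
print("Extrapolated halo mass : %.1e Msun (~%.0f times larger)" % (m_halo, m_halo / m_direct))

The point is just that the direct measurement only probes the inner few kpc ; the extra factor of ~100 comes entirely from assuming the galaxy sits in the kind of halo the simulations predict.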
What's the answer ? I have no idea. It's not the sort of thing that can be answered in a blog post, it's the kind of thing you need many different people to look at for the next several years. Right now we don't even really know the nature of the problem. It could be potentially very exciting, but it's just far too early to tell.
Wednesday, 24 August 2016
Those poor defenceless galaxies
This is either very interesting, not interesting at all, or I've made a mistake somewhere...
Three galaxies falling into a cluster and being pummelled by the gravity of 400 other galaxies (not shown). The field of view is the same size in each case, tracking the centre of the galaxy as it falls through the cluster. The orbit of each galaxy should be very similar (though I need to check this to be sure), so each one should be experiencing the same gravitational field from the other galaxies.
On the left we have a small, lightweight galaxy, let's call it galaxy A. It gets fairly heavily disrupted, losing ~50% of its initial mass. Not terribly surprising - actually it did better than I thought it would.
In the middle and on the right we have a much larger galaxy, designed to be similar to a specific real galaxy - though not all of its properties are very well known. The middle panel (let's call it B) uses properties similar to those of an earlier simulation of the same object, with a particularly massive and extended gas disc but not so much dark matter. The one on the right (C) uses more realistic parameters : a slightly smaller, less massive gas disc but quite a bit more dark matter.
Galaxy C suffers by far the least disruption, losing only around 10% of its gas. Hurrah ! It's something like 40 times more massive than the one on the left, so that's not too surprising.
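In case anyone's curious, the mass-loss numbers come from counting the gas particles that are still gravitationally bound to the galaxy. A minimal sketch of that check, assuming numpy arrays of particle data (the array names and the enclosed-mass function are hypothetical) :

import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def bound_fraction(pos, vel, centre, v_centre, enclosed_mass):
    # pos, vel : (N,3) particle positions [kpc] and velocities [km/s].
    # enclosed_mass(r) : total galaxy mass [Msun] within radius r [kpc].
    r = np.linalg.norm(pos - centre, axis=1)
    v2 = np.sum((vel - v_centre)**2, axis=1)
    # Specific energy : kinetic minus (approximate) potential.
    energy = 0.5 * v2 - G * enclosed_mass(r) / r
    return np.mean(energy < 0.0)  # fraction of particles still bound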
What is surprising is galaxy B. It's 20 times more massive than galaxy A, but suffers just as badly - if not worse. Half its gas gets ripped off, and when you look at the disc close-up you see it's in far worse shape. So the factor of 20 in mass hasn't made much difference, yet a further factor of 2 increase to galaxy C makes a huge difference. That doesn't make a lot of sense to me. OK, B also has a bit more gas which is a bit more extended, but still...
There are a couple of possibilities. Perhaps the difference in mass has put the galaxies on slightly different orbits, and B has been unlucky and collided with a particularly massive galaxy. Or perhaps the difference in rotation speed is responsible - B might have had a resonant encounter with a galaxy moving past it at roughly the same speed as it's rotating, which maximises the time the other galaxy has to pull on the gas. But neither of these seems like a good explanation for why there's such a dramatic difference. Hmmm....
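The resonance idea is at least easy to test. A sketch of the check, assuming I extract the perturber's velocity relative to galaxy B from the snapshots (all names and numbers here are hypothetical) :

import numpy as np

def is_resonant(v_perturber_rel, v_circular, tolerance=0.3):
    # An encounter is roughly 'resonant' when the perturber's speed
    # relative to the disc is comparable to the local circular speed.
    v_rel = np.linalg.norm(v_perturber_rel)
    return abs(v_rel - v_circular) / v_circular < tolerance

# e.g. a perturber passing at ~127 km/s by a disc rotating at 130 km/s :
print(is_resonant(np.array([120.0, 40.0, 10.0]), v_circular=130.0))  # True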
Friday, 12 August 2016
Avoiding the need to publish or perish
I'd suggest that the situation can be improved with a combination of changes to journals, publication and CV culture.
Currently the options for publishing are essentially limited to a mainstream journal, or Nature or Science, the latter two being accorded greater prestige. I'm not sure about Science, but Nature is no longer held in particularly high regard by many astronomers. Still, while assessing the quality of research largely means trying to quantify the unquantifiable, perhaps we can at least try to quantify things in a better way than the current system does.
Research falls into various categories. Some studies are purely observational catalogues designed for future work - they present and measure data, but say nothing about what it means. Other papers are the opposite, collecting no new data themselves but relying purely on other people's previous work. Many are a mix of the two. Some papers which do try to interpret the data do so from a purely observational perspective, others use nothing but analytic or numerical modelling, and a few use both. And then there are "replication studies" (not sure that's the best term), which deliberately test whether previous conclusions stand up to a repeat analysis - usually using new methods or different data rather than literally replicating exactly what the previous team did.
Currently journals do not distinguish these (or other) different sorts of research in any way. A published paper is a published paper, end-of. OK, many journals also publish letters (short, usually slightly more important/urgent findings) as well as main articles, but that's it. A few journals are slightly more suitable for catalogues as opposed to new theories, but there's no strict demarcation as to which journal to publish different sorts of studies in.
But perhaps there should be - or if an entirely new journal is too much, perhaps there should be different divisions within journals. E.g. there's MNRAS and MNRAS Letters, why not also MNRAS Catalogues, MNRAS Modelling, MNRAS New Ideas I Just Thought Up And Would Very Much Appreciate It If Someone Else Could Test For Me, Thanks. In this way it would be easier to look at an author's CV and determine not just how much research they do, but what sort - are they mainly collecting and cataloguing data, thinking up new interpretations, testing previous research, lots of different things, what ? A wise institute will hire people with a diverse range of skills, not just the ones who publish the most papers of any type. And it will hire some of the extremes - people who only do observations, only simulations - as well as from the more usual middle ground.
Labelling the research won't help without a corresponding change in how research is valued, e.g. how much it matters on your CV. All the different sorts of research are valuable, but a finding which has been replicated is much more significant. Far from being the least important - as in, "let's check just to make sure" - replication should be subjected to the strictest form of peer review. A paper verified by an independent replication study should be held in much higher regard than one which hasn't been (of course some findings can't practically be replicated - no-one's going to repeat a project that took five years to complete, so let's not go nuts with this).
At the same time, stifling novel ideas should be the last thing anyone wants. A good researcher is probably not one whose every paper is verified - that probably means they just haven't had any interesting ideas. You want a mixture, say, 50%. Vigilance in the peer review system would stop people from gaming it, e.g. by deliberately publishing a mixture of mediocre and crackpot research. However, the notion that only verified findings matter needs to be broken. Yes, if a paper repeatedly fails to stand up to scrutiny that line of inquiry should be abandoned - but that doesn't mean the idea wasn't a good one at the time.
Maybe all this will even help with the silly grant systems that assess projects based on the number of papers produced. If a project produces five papers which contain new ideas but no independently replicated findings, maybe it isn't as good as one which produced three papers with a mixture of observation, theory and interpretation. Or then again, maybe we should just end the silly grant system entirely, because it's silly.
https://www.youtube.com/watch?v=42QuXLucH3Q
Tuesday, 9 August 2016
My hydrogen is better than your hydrogen
There's hydrogen and then there's hydrogen. But which hydrogen is the best hydrogen ?
In the last few years, the prevailing view has been that it's molecular hydrogen (H2) that's important for star formation, with atomic hydrogen (HI) being a sort of boring sideshow. HI is like the thousands of hapless contestants on The X Factor who have all the singing talent of a diseased cat that's being run over by a lawnmower, whereas H2 is like the incredibly small number who can actually carry a tune. Every H2 molecule originally starts out as two hydrogen atoms, but only when they combine to form the molecule does star formation have any chance of happening. At least, that's the theory.
This paper challenges that in an unexpected way : by looking at galaxies which host gamma ray bursts. GRBs are thought to be the result of massive exploding stars. The authors measure the HI content of the host galaxies for the first time and find that they're somewhat gas rich. Not like, OMFG that's gassier than that bloke in the pub who farted a tune, just a little bit more HI gas than normal. Whereas in general GRB host galaxies are known to have very little H2*. The star formation rates and other properties of these galaxies are also entirely normal. How can they have normal star formation rates, host GRBs and yet not have much in the way of H2 ?
* A major weakness is that they only have (indirect) H2 observations for one galaxy, and those only give an upper limit - which is not particularly low. Their conclusions would be much stronger if they had H2 observations of these particular galaxies, but for now they have to assume the galaxies are typical of GRB hosts in general.
The authors suggest four possibilities :
- The UV radiation from the massive stars which are forming might dissociate the H2. But that would be unusual : it should only happen very close to the most massive stars, not throughout the entire galaxy.
- There might be more H2 than is detected. This is a possibility I think they don't give enough credit to - H2 can (usually) only be detected indirectly (silent but deadly, if you will), so it's possible there's a lot of undetected H2. Jury's out on that one.
- The HI might be rapidly converted into H2 which is then very, very efficiently converted into stars. That doesn't work so well since the formation time of the H2 is thought to be much longer than the collapse time of the HI.
Which leaves the fourth possibility : that in this case it's the HI that's forming the stars directly. And why not ? These galaxies have plenty of HI but not much H2 at all. Theoretically this makes sense.
H2 is thought to require dust grains on which to form, but GRBs have been shown to occur in very dust-poor regions of the host galaxies where there shouldn't be much H2. And if stars are forming from pure HI, it makes sense that they'd be particularly massive and so lead to GRBs : HI is warmer than H2, and therefore needs more mass to collapse. H2 is cold and so is much more prone to fragmenting - it has less thermal pressure pushing it outwards. Also the observed correlation between star formation rate and gas content is tighter when the HI is included.
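To put rough numbers on the "needs more mass to collapse" claim, here's a back-of-the-envelope Jeans mass comparison. The temperatures and densities are illustrative textbook-ish values, not numbers from the paper :

import math

def jeans_mass(T, n, mu):
    # Jeans mass in solar masses for temperature T [K], number density
    # n [cm^-3] and mean molecular weight mu, using
    # M_J ~ (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2).
    k_B, G, m_H, M_sun = 1.381e-16, 6.674e-8, 1.673e-24, 1.989e33  # cgs
    rho = mu * m_H * n
    mj = (5 * k_B * T / (G * mu * m_H))**1.5 * (3.0 / (4 * math.pi * rho))**0.5
    return mj / M_sun

print(jeans_mass(T=100.0, n=50.0, mu=1.3))    # warmish HI : thousands of Msun
print(jeans_mass(T=15.0, n=1000.0, mu=2.3))   # cold H2 : tens of Msun

So gas that stays warm and atomic can only collapse in big lumps, which plausibly favours massive stars - and hence GRBs.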
One of the really neat things is that one of the host galaxies they study has a massive, optically dark cloud of HI very nearby. It's almost as massive as the galaxy, about the same size (and so density, which is the important thing for star formation) and a similar line width (which may indicate rotation). This is extremely strange and it's surprising they don't comment on this further - this is one of the most massive optically dark HI clouds known, and its line width is respectably high. They interpret it as a signature of cold accretion : primordial gas that's flowing from intergalactic space into the galaxy (though they say higher resolution observations would be nice).
I've never liked this explanation. Why does this appear to be happening only in a handful of galaxies ? Why not around every isolated galaxy ? And why is the part of the HI close to the galaxy so incredibly dense - why isn't there a longer, more diffuse tail ? Why are these clouds usually seen on only one side of the galaxy ? Why isn't it forming stars ? It can't be the product of a tidal encounter because there's nothing nearby to have an encounter with, so what's going on ? Are those my feet ?
One proton, one electron - lots of unanswered questions.
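For what it's worth, a line width and a size are enough for a crude dynamical mass, which is part of why the dark cloud is so intriguing. A sketch with made-up but plausible numbers (the real values are in the paper) :

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def dynamical_mass(line_width, radius):
    # Assume the full line width W [km/s] traces rotation, so
    # v_rot ~ W/2, and M ~ v_rot^2 * R / G for radius R [kpc].
    v_rot = line_width / 2.0
    return v_rot**2 * radius / G

print("%.1e Msun" % dynamical_mass(200.0, 10.0))  # ~2e10 for W=200 km/s, R=10 kpc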
http://adsabs.harvard.edu/abs/2015A%26A...582A..78M
Monday, 8 August 2016
Scientists are not latter-day Mayans
Unfortunately the Mayans had used their exquisite astronomical data within a mythological culture of astrology that rested upon false but mathematically sophisticated theories about the Universe. They collected unprecedented amounts of precise astronomical data... but failed to come up with the breakthrough ideas of Nicolaus Copernicus, Galileo Galilei, Johannes Kepler and Isaac Newton.
Popular strategy in funding science currently guides the allocation of most of the Astronomy Division funds at the US National Science Foundation (NSF) to major facilities and large scale surveys. The focus is clearly on large team efforts to collect better data within the mainstream paradigms of Astronomy, under the assumption that good science will follow.
As I (and many others) have written before (e.g. http://astrorhysy.blogspot.cz/2015/10/false-consensus.html), an over-emphasis on big science is dangerous. But you do need big projects to answer some questions. So what you want is a mix of big and small groups. Big groups are better at making very precise measurements, or answering very narrow questions with carefully stated assumptions. Smaller groups and individuals are better at innovative thinking but it's harder for them to reach the same level of precision and accuracy. If you have too much of either, you might be in trouble.
I noticed this bias from close distance recently while serving on the PhD thesis committee of a student who was supposed to test whether a particular data set from a large cosmological survey is in line with LCDM; when a discrepancy was found, the goal of the thesis shifted to explaining why the data set is biased and incomplete. How can LCDM be ruled out in such a scientific culture? Observers should strive to present their results in a theory-neutral way rather than aim to reinforce the mainstream view.
Well, observers are going to have their own biases just like everyone else. What you want to do is make the data publicly available as far as financially possible - ideally at the raw, unprocessed level, but at least in reduced, human-readable form. But yeah, if someone goes looking to ask, "how does this data support my conclusion ?" rather than "what conclusion does this data support ?" then they're not really doing science at all.
Given the strong sociological trends in the current funding climate of team efforts, how could we reduce the risk of replicating the indoctrinated Mayan astronomy? The answer is simple: by funding multiple approaches to analyzing data and multiple motivations to collecting new data. After all, the standard model of cosmology is merely a precise account of our ignorance: we do not understand the nature of inflation, the nature of dark matter or dark energy. Our model has difficulties accounting for what we see in galaxies (attributed often to complicated “baryonic physics”), while at the same time not being able to see directly what we can easily calculate (dark matter and dark energy). The only way to figure out if we are on the wrong path is to encourage competing interpretations of the known data.
Funding agencies should promote the analysis of data for serendipitous (nonprogrammatic) purposes. When science funding is tight, a special effort should be made to advance not only the mainstream dogma but also its alternatives. To avoid stagnation and nurture a vibrant scientific culture, a research frontier should always maintain at least two ways of interpreting data so that new experiments will aim to select the correct one. A healthy dialogue between different points of view should be fostered through conferences that discuss conceptual issues and not just experimental results and phenomenology, as often is the case currently. These are all simple, off-the-shelf remedies to avoid the scientific misfortune of the otherwise admirable Mayan civilization.
Not sure I'd describe the Mayans quite so favourably... the difficulty, of course, is distinguishing between points of view which are genuinely controversial (is galaxy formation all due to mergers or something else ?) and those which have already been well and truly refuted (does the Earth really orbit the Sun ?). Too much open-mindedness is as bad as too little (http://astrorhysy.blogspot.cz/2015/10/not-so-open.html). Then again, I'm generally happy with the state of things as they are. In particular I see a lot of senior professors who are all too happy to consider radically different alternatives (dark matter doesn't exist ! it's all baryons ! the universe isn't really expanding !). Conferences which are largely limited to discussing incremental experimental results are in the minority, in my experience.
Disclaimer : Avi is a co-I on my VLA proposal to observe dark HI clouds. We postulate three different models to test, but as I've said before, observations normally tell you something completely different from what you expected. Can't imagine any halfway-decent observer who would say, "these observations don't support my ideas, therefore they must be wrong", although see also http://astrorhysy.blogspot.cz/2016/05/nemesis.html
http://arxiv.org/abs/1608.01731
Saturday, 6 August 2016
Death of the Flying Snakes
In the last batch of simulations, we dropped a long gas stream into the gravitational potential of a cluster to see if it would get torn apart and produce features resembling those seen in reality : small, optically dark hydrogen clouds without stars. It didn't. But it did turn the stream into some remarkably snake-like structures, which was nice.
An obvious and (relatively) easy to fix problem with this is that we just ignored the parent galaxy where the gas came from in the first place. We showed that this probably wouldn't make any difference, but of course it's better to test this. This way we can model not only what happens to the gas streams but how they form in the first place, as well as what happens to the galaxy.
It's just possible that if the initial structure of the streams produced is very different to what we assumed in the previous model, we might get a very different outcome. Also, if this does produce the dark hydrogen clouds we're after, we need to know if the galaxy still looks like a galaxy - if it gets smashed into something unlike anything observed, that would falsify the model. And probably the most important feature is the stars : with stars included in the galaxy we can now test if the clouds produced would really be optically dark or not. Previously we could only measure the properties of the gas and just assumed there wouldn't be any stars present.
A preliminary analysis shows that, as predicted, including the galaxy makes no difference for the dark clouds : they still form but they're still very, very rare - and that's without checking if they'd be optically dark or not, so I expect them to be even rarer in this model. The galaxies get bashed around a bit, but about half of them suffer no noticeable effects at all. The other half have some nice structures induced in them for a while, but nothing outlandish. They don't even lose a measurable amount of gas. And, alas, there are no more snakes...
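As for the "optically dark" test itself, the plan is simple enough. A sketch of the kind of check involved - the threshold and array names are hypothetical placeholders, not values from the actual analysis :

import numpy as np

def is_optically_dark(star_pos, star_mass, cloud_centre, cloud_radius,
                      threshold=1e4):
    # A cloud counts as 'dark' if the stellar surface density within
    # its radius [kpc] falls below a detectability threshold
    # [Msun / kpc^2]. star_pos : (N,3) positions, star_mass : (N,) Msun.
    r = np.linalg.norm(star_pos - cloud_centre, axis=1)
    stellar_mass_inside = star_mass[r < cloud_radius].sum()
    surface_density = stellar_mass_inside / (np.pi * cloud_radius**2)
    return surface_density < threshold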
People have been suggesting that dark clouds are some form of "tidal debris" for years, but there are only two other papers explicitly showing this is a possibility. Unfortunately those results have been over-interpreted. While large dark gas streams can be produced, small ones with properties like some of those observed are almost impossible to produce in this way.
There's still a fair bit of work left to do with these purely gravitational models. This model uses a galaxy with properties typical of a spiral, whereas the previous claims were based on a less massive spiral with more gas in a more extended disc. So we need to run a direct comparison to see if and how that influences the results. We also need to measure the stellar content of the stripped gas. Previous models said this was very low; I suspect ours will show it to be a bit higher. Still, it's very clear that the pure tidal debris scenario just doesn't work.
What is very much harder to predict is what would happen if we included the hot gas of the intracluster medium. In principle this is easy to do, in practice not so much. More physics means more computational expense. The original "flying snakes" took just a few days of computation time. These galaxies - which have more components and more stars - took about three weeks. Adding in the surrounding gas could make things take several times longer again. We'll get there eventually, but not today.
Wednesday, 3 August 2016
Science outreach is not elitism
The World Music Festival, Womad, hosted a science pavilion this year. It's the latest attempt to reach non-scientific audiences by bridging the gap with the arts. But are such initiatives successful ? The pavilion's inauguration follows criticism by some, such as science writer Simon Singh, of the cost and effectiveness of some public science engagement.
From the embedded link :
During his talk, Dr Singh, author of seven books on sciences and maths, said that such a project’s value for money should be compared with the cost of a science teacher.
Mmm, possibly. The difference is that teachers are a long-term solution, whereas outreach events like this are generally aimed at adults. Since tax-paying adults aren't going to go back to school, both are needed to maintain public interest and engagement in science.
Also from the link :
Dr Singh criticised a number of projects, including a 2005 ballet inspired by the theory of relativity that was launched to celebrate the centenary of Albert Einstein’s most seminal breakthroughs. “People hate physics, they hate ballet; all you’ve done is allowed people to hate things more efficiently,” he told the 2:AM Amsterdam conference about alternative metrics on 7 October.
I'm not a ballet fan but I can't see any reason to provoke ballet enthusiasts, it's not as if they've done anything to me. I rather like the idea of combining science and the arts, for obvious reasons.
"There are a lot of intellectually curious people here, probably not coming to learn about science, but it's a great way of talking to them," says Prof Jones. "Many of them are tax payers who fund what we do and it's important that they understand what their taxes are delivering." "What we do is help people bridge that gap themselves by stimulating them," says Mr Large. "The trick is communication. Music is about communicating emotion. Science is about discovering facts, but if you can't communicate them there is little point in discovering them."
I also think that having science in unexpected places reinforces the often-overlooked fact that the entire freakin' modern world is utterly dependent on scientific discoveries. You don't have to ram this down people's throats, but making it easy for people to go to a science outreach event who wouldn't otherwise do so (a.k.a. "nudge" theory) just sounds like a thoroughly sensible idea to me.
Perhaps the best attended event is the Q and A with Steven Moffat on the science and sci-fi of Doctor Who, with the audience overflowing onto the grass outside the pavilion. Despite this, Moffat, who is the BBC series' head writer and executive producer, says he knows nothing about science.
Ironically, the latest seasons of Doctor Who have had far more of a sci-fi leaning than previously, albeit a certain type of sci-fi. Lots of explorations of "what if ?" concepts, which is at least as essential to sci-fi as the nitty-gritty details of how the spaceships are supposed to work. If not more so : the social impact of technologies and discoveries is often what makes them interesting, not necessarily the science itself.
"Putting science alongside music is the correct and proper way to apprehend science," he tells the BBC. "It's not a separate thing. They're not for different kinds of people. They're for exactly the same kind of people."
Damn straight.
http://www.bbc.com/news/science-environment-36943937
Modern science hasn't fallen from a golden age
Many, many good and provocative things in this.
There hasn’t been a major success in theoretical physics in the last few decades after the standard model, somehow.
That should be "breakthrough", not "success". There have been a great many successes, not least of which are the Higgs boson and gravitational waves. But are they breakthroughs ? Yes and no. Yes, because GWs will help us reveal more about the Universe than we could otherwise learn; no, because we haven't yet found the breaking point of the standard model. Theories have been validated - they have had successes - but that's not always the same as making progress.
If Einstein had gone to school to learn what science is, if he was any one of my colleagues today who are looking for a solution of the big problem of physics today, what would he do? He would say, “OK, the empirical content is the strong part of the theory. The idea in classical mechanics that velocity is relative: forget about it. The Maxwell equations: forget about them. The theories themselves have to be changed, OK? What we keep solid is the data, and we modify the theory so that it makes sense coherently, and coherently with the data.”
To an extent. He would be aware that any new theory had to give results at least as good as the old one where applicable, and/or make predictions in new areas. He would seek something that approximated to the old theory under certain conditions. He certainly wouldn't forget about the old idea or its predictions, but he'd be happy to abandon the old conceptual basis of the model.
That’s not at all what Einstein does. Einstein does the contrary. He takes the theories very seriously. He says, “Look, classical mechanics is so successful that when it says that velocity is relative, we should take it seriously, and we should believe it. And the Maxwell equations are so successful that we should believe the Maxwell equations.” He has so much trust in the theory itself, in the qualitative content of the theory—that qualitative content that Kuhn says changes all the time, that we learned not to take too seriously—and he has so much in that that he’s ready to do what? To force coherence between the two theories by challenging something completely different, which is something that’s in our head, which is how we think about time.
Well, I'm not sure about that. Certainly he believes Maxwell's equations, because they demonstrably work. But Maxwell's underlying concept was some very strange notion about vortices (http://www.clerkmaxwellfoundation.org/DysonFreemanArticle.pdf) which I seem to recall no-one took very seriously. So I am not at all convinced there is such a strong difference between the way Einstein thought and the way modern physicists behave.
Every physicist today is immediately ready to say, “OK, all of our past knowledge about the world is wrong. Let’s randomly pick some new idea.”
No, that's the way of the pseudoscientist, not actual scientists. This is the first time I've ever heard anyone accuse mainstream scientists of being too innovative !
But it’s absurd when everybody jumps and says, “OK, Einstein was wrong,” just because a little anomaly indicates this. It never works like that in science.
Yes - but this is what the media hype is all about. It is absolutely not the case in real science. I'm amazed to hear an actual scientist suggest this is what happens, because it doesn't.
Science is not about certainty. Science is about finding the most reliable way of thinking at the present level of knowledge. Science is extremely reliable; it’s not certain. In fact, not only is it not certain, but it’s the lack of certainty that grounds it. Scientific ideas are credible not because they are sure but because they’re the ones that have survived all the possible past critiques, and they’re the most credible because they were put on the table for everybody’s criticism.
The very expression “scientifically proven” is a contradiction in terms. There’s nothing that is scientifically proven.... If we’ve learned that the Earth is not flat, there will be no theory in the future in which the Earth is flat. If we have learned that the Earth is not at the center of the universe, that’s forever. We’re not going to go back on this. If you’ve learned that simultaneity is relative, with Einstein, we’re not going back to absolute simultaneity, like many people think.... I seem to be saying two things that contradict each other.
That's because you are. You cannot say "we're not going to go back on this" and "nothing is ever proven with certainty" : the two ideas are mutually exclusive. Some things - a few, rare things - are known with what amounts to certainty. The Earth isn't flat; the only way that could ever be otherwise is if the Universe were all a simulation or run by a capricious deity. Those ideas are possible, but they aren't science. You can't do science without assuming an objective, measurable reality. But this level of certainty is a rare thing indeed, and of course that's not what science is largely about.
The question is, Why can't we live happily together and why can’t people pray to their gods and study the universe without this continual clash? This continual clash is a little unavoidable, for the opposite reason from the one often presented. It’s unavoidable not because science pretends to know the answers. It’s the other way around, because scientific thinking is a constant reminder to us that we don’t know the answers. In religious thinking, this is often unacceptable.
Only if you take the stereotype of religious thinking. And actually science does claim to know things, or at least it knows them well enough to rule out some claims. Earth created in six days ? Nope, that didn't happen, end-of. Entire Universe run by a supernatural deity ? Totally unprovable and well beyond the remit of science.
The scientists who say “I don't care about philosophy” —it’s not true that they don’t care about philosophy, because they have a philosophy. They’re using a philosophy of science. They’re applying a methodology. They have a head full of ideas about what philosophy they’re using; they’re just not aware of them and they take them for granted, as if this were obvious and clear, when it’s far from obvious and clear. They’re taking a position without knowing that there are many other possibilities around that might work much better and might be more interesting for them.
On this I completely agree.
https://newrepublic.com/article/118655/theoretical-phyisicist-explains-why-science-not-about-certainty?
Tuesday, 2 August 2016
Simulations are always more fun when they go wrong...
Simulations are always more fun when they go wrong. This one was supposed to be a galaxy that falls into a cluster and gets harassed. Somehow, the code decided to completely ignore the dark matter and star particle files, so it was just a fast-rotating disc of gas, which explodes. Then it gradually falls back into the cluster, where the other galaxies turn it into this vast, complex, three-dimensional structure.
How this happened I have absolutely no idea. It shouldn't be possible, because all the files are copied from a master directory, so there's no way the dark matter file should be empty. But it is anyway. Even removing the dark matter shouldn't be enough to make it explode - the outer parts should fly off, but the inner part should remain bound. But it didn't. Oh well.
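The obvious lesson is to add a pre-flight check before each run. A minimal sketch, assuming the inputs are ordinary files on disk (the file names are hypothetical placeholders) :

import os
import sys

# Abort the run if any particle file copied from the master directory
# is missing or empty.
for fname in ["gas_particles.dat", "dark_matter.dat", "star_particles.dat"]:
    if not os.path.exists(fname) or os.path.getsize(fname) == 0:
        sys.exit("Input file %s is missing or empty - aborting run." % fname)
print("All particle files present and non-empty.")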
If insanity is trying to do the same thing twice and expecting a different result, then I guess I must be insane. I re-ran it and got a much more normal galaxy that just gets a little bit disturbed and definitely doesn't explode - which is exactly what was supposed to happen, and what happens in every single other run. It shall remain forever a minor mystery, but it's fun to watch.
Monday, 1 August 2016
VLA proposal submitted !
VLA proposal submitted, after a week of frantically re-writing last year's version because I thought the deadline was 1st September, not 1st August. 16 hours of observing time, and we should finally be able to determine whether these dark hydrogen clouds are dark galaxies or just weird bits of fluff. Hopefully adding more authors - and the two more papers published on this since last year - will convince NRAO that this extremely modest amount of time is worth it. Second time lucky ! Time to relax by watching Robot Wars.