Investigating the radial acceleration relation in early-type galaxies using the Jeans analysis
The first author lives in the office across the corridor from me, but I didn't know he was working on this. I put this one under the 'My' Astronomy Articles collection only (as usual) to offer my own commentary.
The MDAR, also called the RAR, MDR and various other names, is a relation between the predicted and observed acceleration of material in a galaxy. The prediction is based on only the observed material - assuming that all the material we see is all that's there, and that Newtonian gravity is correct. But since many other observations indicate the presence of additional dark matter, we don't expect this prediction to match the actual measured accelerations. Naively, we might expect there to be not much of any relation at all, since dark matter is so much more dominant than normal matter according to those other observations.
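For concreteness, the two quantities being compared are usually defined roughly as follows (this is the standard formulation for rotation curves, not anything specific to this paper) :

```latex
% Observed (dynamical) acceleration, straight from the measured rotation curve :
g_{\rm obs}(R) = \frac{V_{\rm obs}^{2}(R)}{R}

% Predicted (baryonic) acceleration from the visible stars and gas alone, assuming
% Newtonian gravity (written here for a roughly spherical mass distribution) :
g_{\rm bar}(R) = \frac{G \, M_{\rm bar}(<R)}{R^{2}}
```

The MDAR/RAR is simply the plot of one against the other, point by point, across many galaxies.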
Strangely, the actual MDAR is a very tight relation with almost no scatter ! It seems that you can use the baryons to predict exactly how much dark matter should be there, which doesn't make a lot of sense according to the standard model... at least, not obviously. In fact that naive interpretation turns out to be wrong. There are a whole bunch of selection effects at work as to which dark matter halos actually host galaxies, and standard models have had no difficulty whatsoever in reproducing the observed MDAR (I have a detailed write-up here : http://astrorhysy.blogspot.cz/2017/08/this-isnt-law-youre-looking-for.html).
(I completely disagree with the authors of this study that "The tightness of the RAR for the LTGs [spirals] remains unexplained with the DM hypothesis.")
Still, the MDAR was actually predicted decades ago by MOdified Newtonian Dynamics, which does away with the need for dark matter, so that's interesting. But there were also observations indicating that the MDAR breaks down in certain dwarf galaxies, potentially a challenge for MOND : acceleration is acceleration, it shouldn't work differently in different places (this is a gross oversimplification since MOND is bloody complicated, but it gets the point across).
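For reference, the reason MOND predicts such a relation automatically is that it ties the observed acceleration directly to the baryonic one. In its simplest form the limiting behaviour looks roughly like this (a sketch of the usual interpolation, not the specific formulation used in this paper) :

```latex
g_{\rm obs} \approx
\begin{cases}
  g_{\rm bar}                 & g_{\rm bar} \gg a_{0} \quad \text{(Newtonian regime)} \\
  \sqrt{g_{\rm bar}\, a_{0}}  & g_{\rm bar} \ll a_{0} \quad \text{(deep-MOND regime)}
\end{cases}
\qquad a_{0} \approx 1.2 \times 10^{-10}\ {\rm m\,s^{-2}}
```

Since the observed acceleration depends only on the baryonic one, the scatter is essentially zero by construction - which is also why the relation apparently breaking down for some class of galaxies is so awkward for MOND.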
In this Proceedings article (i.e. not a peer-reviewed paper), the authors show that the MDAR doesn't work for elliptical galaxies either. And they're not just poxy little dwarf galaxies where the data might not be very reliable, they're stonking great objects where the measurement should be safe enough (inasmuch as anything ever is in astronomy). Interestingly, the deviation looks a lot like the deviation that was already known for dwarf galaxies, which the standard model explains very well.
It gets even more fun. Two thirds of ellipticals don't follow the standard MDAR. The ones that do appear to be much more similar to disc galaxies, dominated by rotation (I'd guess these are actually lenticulars, not ellipticals). So disc galaxies follow the MDAR, but galaxies dominated by random motions don't. That could be a really serious challenge for MOND, as the authors note that such deviant galaxies would "need copious amounts of dark matter in their outer regions even in the MOND approach". What exactly they mean by "copious", they don't say. MOND does require a little bit of dark matter in clusters, but only by a factor ~2-3 compared to tens for the standard model. The authors note :
It is possible to reconcile MOND with our results by supposing additional invisible matter in the galaxies. MOND is already known to require some DM in galaxy clusters. The most discussed candidates are sterile neutrinos (Angus 2007) and compact baryonic objects (Milgrom 2008). We note that the DM required in our galaxies might be connected with the yet undetected gas which is predicted to flow into galaxies to explain various observations (Sancisi et al. 2008).
Michal is very pro-MOND for some reason, though to his credit he acknowledges in the abstract the possibility that these results disprove it. Since this Proceedings reports on a conference which took place last month, it misses the very recent discovery of much of that undetected gas, which is now known to reside not in galaxies but in much larger filaments :
https://plus.google.com/u/0/+RhysTaylorRhysy/posts/FZXbAWmoHgY
It would be interesting to see what the standard model predicts for elliptical galaxies. Thus far, AFAIK no-one's tried to simulate that because the MDAR was measured for spiral galaxies. But it feels like MOND is becoming ever-more desperate : with the caveat that "copious" is undefined here, if it requires as much dark matter as the standard model then I really don't see the point of it at all any more. On the other hand, MOND is also still lacking a good numerical simulation to show what it really predicts.
https://arxiv.org/abs/1711.06335
Monday, 20 November 2017
Yes, there's still a 'missing satellites problem'
A critical challenge to the cold dark matter (CDM) paradigm is that there are fewer satellites observed around the Milky Way than found in simulations of dark matter substructure. We show that there is a match between the observed satellite counts corrected by the detection efficiency of the Sloan Digital Sky Survey (for luminosities L ≳ 340 L⊙) and the number of luminous satellites predicted by CDM, assuming an empirical relation between stellar mass and halo mass. The "missing satellites problem", cast in terms of number counts, is thus solved, and implies that luminous satellites inhabit subhalos as small as 10^7−10^8 M⊙.
I do like the "issing satellites problem". But I'm very skeptical of this claim.
Although there are hints that the luminous satellite distribution is anisotropic [6, 13, 16–18], we assume it is sufficiently isotropic and spherical to be separable.
No, there's no "hint", it just is. Look, you can see it for yourself :
There it is, big as life. You can't just pretend it might not exist. And corrections for the incompleteness of surveys and the zone of avoidance (where the Milky Way blocks the view) have already been made and found to be minor. Also, while I think the work of Kroupa, Ibata, Pawlowski et al. is just wrong in many regards for the claims of other satellite planes around other galaxies, not citing them at all doesn't seem right.
Star formation in low-mass halos has been demonstrated to be suppressed by reionization and feedback. The discovery of many new dwarfs below the luminosity limit of the classical dwarfs has also closed the gap, as has the understanding that completeness corrections for the new dwarfs are large. In this Letter, we show that such corrections imply that the number of satellite galaxies that inhabit the Milky Way is consistent with the number of luminous satellites predicted by CDM.
Yes, but this has been known since Simon & Geha 2007. You can solve the problem, with enough complexity. The difficult part is not showing that any solution exists (which is what I think they've done here), but that any particular solution is the correct one. You need to show that the parameters required for the explanation are the ones which are actually true in reality.
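As a toy illustration of the kind of completeness correction being invoked (the numbers and detection efficiencies below are made up for illustration, not taken from the paper) :

```python
import numpy as np

# Hypothetical detected satellites : luminosities (L_sun) and the fraction of the
# survey volume over which each one could actually have been detected (made-up numbers).
luminosities = np.array([1e3, 5e3, 2e4, 1e5, 1e6, 1e7])
detection_efficiency = np.array([0.02, 0.05, 0.15, 0.4, 0.8, 0.95])

# Each detected satellite stands in for 1/efficiency satellites in total,
# so the corrected count can be far larger than the raw count.
corrected_total = np.sum(1.0 / detection_efficiency)
print(f"Raw count : {len(luminosities)}, corrected count : {corrected_total:.0f}")
```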
https://arxiv.org/abs/1711.06267
I do like the "issing satellites problem". But I'm very skeptical of this claim.
Although there are hints that the luminous satellite distribution is anisotropic [6, 13, 16–18], we assume it is sufficiently isotropic and spherical to be separable.
No, there's no "hint", it just is. Look, you can see it for yourself :
There it is, big as life. You can't just pretend it might not exist. And corrections for the incompleteness of surveys and the zone of avoidance (where the Milky Way blocks the view) have already been made and found to be minor. Also, while I think the work of Kroupa, Ibata, Pawlowski et al. is just wrong in many regards for the claims of other satellite planes around other galaxies, not citing them at all doesn't seem right.
Star formation in low-mass halos has been demonstrated to be suppressed by reionization and feedback. The discovery of many new dwarfs below the luminosity limit of the classical dwarfs have also closed the gap, as has the understanding that completeness corrections for the new dwarfs are large. In this Letter, we show that such corrections imply that the number of satellite galaxies that inhabit the Milky Way is consistent with the number of luminous satellites predicted by CDM.
Yes, but this has been known since Simon & Geha 2007. You can solve the problem, with enough complexity. The difficult part is not showing that any solution exists (which is what I think they've done here), but that any particular solution is the correct one. You need to show that the parameters required for the explanation are the ones which are actually true in reality.
https://arxiv.org/abs/1711.06267
Friday, 17 November 2017
Lecture 4/4 : All Hope Abandon, Ye Who Enter Here
And so at last my first foray in lecturing draws to a close. This time I look at problems in galaxy evolution and how they might be solved. The missing satellite problem is just one aspect of the much greater missing galaxy problem. After nearly 20 years, simulations are finally getting the number of predicted galaxies to match the observations... but they're so complicated, this doesn't necessarily mean they've got the answer right. Planes of satellite galaxies, whilst generating papers with hundreds of citations, are (I argue) just pure nonsense and absolutely nothing to worry about - and even if they're real, they're not the dark matter killer they're claimed to be. Ultra diffuse galaxies, dark galaxies, the Tully-Fisher relation, the mass discrepancy acceleration relation, and a showdown between MOND and CDM : it's all in here.
Regular blog rants about Jeremy Corbyn and Plato can now resume unhindered. I know y'all are dying for that. :P
Though I'll probably be extracting parts of this to make shorter, more outreachy posts for those who don't want to wade through everything here.
This post is a placeholder. I will add a better summary here in due course.
Monday, 13 November 2017
Lecture 3/4 : Be Careful What You Wish For
This is the third part of my super-shortened course on galaxy evolution. You can find the complete transcript of the 90 minute lecture here, or you can stay with this post for the 9 minute version (if not less).
In galaxy studies we have very limited data and can't control our test subjects. Instead, we have to rely on restricted statistical data and numerical models, so it's crucial to understand the limitations of those models. Essentially this post is about why we interpret the data in the way we do, and why getting the right answer just isn't good enough. And there'll be some galaxy evolutionary theory thrown in as well, just for good measure.
The missing satellite problem
Simulations have gone from simple gravity models of the 1940's to the all-singing all-dancing models of today, from using a few tens of particles to a few billion (or more). Nowadays they can include full hydrodynamics, heating and cooling of the gas affected by radiation, heat conduction and chemistry, magnetic fields, and basically be as sophisticated as hell. Their major limit is that a lot of parameters can't be set from observational measurements - we have to guess them. More on that later.
It's good scientific practice to KISS : Keep It Simple, Stupid. Don't dive into the really sophisticated models - begin with something much simpler, gradually increasing the complexity so you understand what each new factor is doing. For instance, the Millennium Simulation was a vast, 10 billion particle model of the evolution of the dark matter in the Universe. It contained absolutely nothing except collisionless but gravitationally-interacting dark matter particles. On the large scale this works really well :
Almost every dark matter halo in the simulation contains potentially detectable normal (baryonic) matter. The bottom line is that these kinds of semi-analytic models predict about ten times as many dwarf galaxies around the Milky Way as we actually observe. Since the dark matter is much more massive than the baryons, adding them in shouldn't be able to change the result very much - or at least that's the naive interpretation. First, we need to understand a bit more about the simulations themselves.
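To give a flavour of what "collisionless but gravitationally-interacting particles" actually means in code, here's a minimal sketch of a direct-summation N-body step. Real codes like the one behind the Millennium Simulation use tree or particle-mesh methods to handle billions of particles, but the underlying idea is the same :

```python
import numpy as np

def gravity_step(pos, vel, mass, dt, G=1.0, soft=0.05):
    """Advance an N-body system by one leapfrog (kick-drift-kick) step."""
    def accel(p):
        # Pairwise separations, shape (N, N, 3) : dx[i, j] = p[j] - p[i]
        dx = p[np.newaxis, :, :] - p[:, np.newaxis, :]
        r2 = np.sum(dx**2, axis=-1) + soft**2          # softened distance squared
        inv_r3 = r2**-1.5
        np.fill_diagonal(inv_r3, 0.0)                  # no self-force
        return G * np.sum(dx * (mass[np.newaxis, :, np.newaxis] *
                                inv_r3[:, :, np.newaxis]), axis=1)

    vel = vel + 0.5 * dt * accel(pos)   # kick
    pos = pos + dt * vel                # drift
    vel = vel + 0.5 * dt * accel(pos)   # kick
    return pos, vel

# A tiny random 'halo' of 100 equal-mass particles, evolved for a few steps.
rng = np.random.default_rng(42)
pos = rng.normal(size=(100, 3))
vel = np.zeros((100, 3))
mass = np.full(100, 1.0 / 100)
for _ in range(10):
    pos, vel = gravity_step(pos, vel, mass, dt=0.01)
```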
Simulations are not magical
I used to think that because you know all the physics at work in a simulation, you automatically understand whatever it does. Yet while you do get full 3D information with complete time evolution, you rarely get a full understanding of what's happening. For a start, simulations have restrictions just as observations do. Their resolution is limited (e.g. by the number of particles and computational power), they don't include all the physical processes at work (because some are hard to simulate, while others are just not fully understood), and what we decide to simulate in the first place is heavily influenced by observations - which have their own problems. So they are, necessarily, simplified. It's important to try and convert our numerical predictions into something we can directly and fairly compare with observations.
The above example simply coloured the simulation particle data so that it looked like the original observations, but much more sophisticated approaches are possible. Creating synthetic observations adds even more complexity : you need, for instance, to model how the gas causes absorption and scattering of the light emitted from the stars, to replace your simulated generic gas particles with multiple gas phases, and a host of other factors besides.
While stars are generally simulated as n-bodies (point mass particles which have gravity but nothing else), the gas is more complex. There are two main ways of dealing with the hydrodynamic effects :
1) Smoothed particle hydrodynamics
In SPH codes the gas is modelled as a collection of particles. As well as having mass, each particle is deemed to be part of a kernel with its surrounding neighbours, over which the hydrodynamic equations can be solved. This then accounts for the variation in density, temperature, and pressure. In effect the particle data is transformed into something more like a continuous fluid.
With the kernels set to contain a fixed number of particles, the resolution of the simulation is adaptive : there are more computations where there are many interactions and fewer where there are not. And you can trace the history of each particle and find out where it originated. SPH suffers where there are sharp boundaries between different fluids though - it has difficulty reproducing observed structures.
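A minimal sketch of the SPH idea - estimating a smooth density at each particle by summing kernel-weighted contributions from its neighbours. I've used a simple Gaussian kernel and a fixed smoothing length for brevity; production codes use spline kernels and adaptive smoothing lengths :

```python
import numpy as np

def sph_density(pos, mass, h):
    """Estimate the density at each particle position using a Gaussian kernel
    of smoothing length h. pos has shape (N, 3), mass has shape (N,)."""
    dx = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    r2 = np.sum(dx**2, axis=-1)
    # 3D Gaussian kernel, normalised so it integrates to one.
    w = np.exp(-r2 / h**2) / (np.pi**1.5 * h**3)
    return np.sum(mass[np.newaxis, :] * w, axis=1)

rng = np.random.default_rng(1)
pos = rng.normal(scale=1.0, size=(500, 3))    # a blob of gas particles
mass = np.full(500, 1.0 / 500)
rho = sph_density(pos, mass, h=0.3)
print(rho.min(), rho.max())                   # denser towards the centre of the blob
```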
2) Grid codes
Another approach is to do away with particles completely. Instead, a finite volume of space can be modelled as a grid of cells, each of which contains some fluid with a density, temperature, pressure and velocity. Thus it models how gas can flow from cell to cell.
Cell sizes can vary so the resolution can be adaptive. Grid codes are much better at modelling hydrodynamic structures, but tend to be computationally expensive and there's no way of knowing where gas in any particular cell originated. So despite knowing all the initial conditions, there are fundamental restrictions on what you can learn from simulations.
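And the corresponding grid-code idea in its most stripped-down form : fluid quantities live in cells, and each timestep some material flows across the cell faces into the neighbouring cells (1D advection with a first-order upwind scheme, purely for illustration) :

```python
import numpy as np

# 1D grid of cells, each holding a gas density ; start with a dense blob in the middle.
n_cells, dx, dt, v = 200, 1.0, 0.4, 1.0   # v * dt / dx < 1 for numerical stability
rho = np.ones(n_cells)
rho[90:110] = 5.0

for _ in range(100):
    flux = v * rho                        # mass leaving each cell through its right face
    # First-order upwind update with periodic boundaries :
    # whatever leaves the left-hand neighbour enters this cell.
    rho += dt / dx * (np.roll(flux, 1) - flux)
```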
Handle razors carefully !
Simulations, and especially the comparisons to observations, are complex beasts. Clearly there's some virtue in keeping things as simple as possible, but even here we have to be careful. There's a popular notion - spread by Jodie Foster in the film Contact - that Occam's Razor says that the simplest explanation tends to be the right one. Occam, however, said no such thing. He said something more like, "entities must not be multiplied beyond necessity" - in essence, prefer simple explanations.
There are good reasons for this, but they have nothing to do with any kind of fundamental truth. Indeed, in science we should never presume to know how the world works : start thinking that the simplest explanation is usually correct and you rapidly degenerate into "a wizard did it". The Universe is a bloody complicated place, and sometimes it needs complex explanations.
John von Neumann is reported to have said that with four free parameters he could fit an elephant, and with five he could make him wiggle his trunk. The more complex your explanation, the more you can adjust it to make it fit the observed data. Simpler explanations are much harder to fudge and therefore easier to test. But that absolutely does not mean that you should never add complexity, because it's equally possible to over-simplify and miss some vital physical process.
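The elephant point is trivial to demonstrate : give a model enough free parameters and it will pass through any data you like, regardless of whether it means anything (a throwaway numerical example, nothing to do with galaxies) :

```python
import numpy as np

# Five arbitrary data points...
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.3, -1.1, 0.7, 5.2, -3.8])

# ...and a model with five free parameters (a quartic) goes through them exactly.
coeffs = np.polyfit(x, y, deg=4)
print(np.allclose(np.polyval(coeffs, x), y))   # True : a 'perfect' fit that means nothing
```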
In the case of the missing satellite problem, the complexity of the baryonic physics that we're missing from the pure dark matter simulations may have nothing to do with changing the halo structures at all. Instead it might be an example of a much more subtle selection effect.
Selection effects : correlation doesn't equal causation
We know the mass of the baryons is too small to affect the dark matter in our simulations. But we also know that the baryons are the only thing we can observe directly. So perhaps our simulations are missing some mechanism that restricts the presence of the baryons to only certain dark matter halos : maybe the rest do exist, but remain invisible. It's worth a brief digression here to show how important selection effects can be, and why statistical measurements can be woefully misleading.
The above chart comes from the fantastic website Spurious Correlations. This correlation is statistically significant, but physically meaningless. For a start, it's not at all clear which is the independent (controlling) variable : does excessive cheese consumption drive people insane and make them become entangled in their bedsheets, or do people commiserate bedsheet-based deaths by eating more cheese for some reason ? Both interpretations are equally absurd and the data says precisely nothing about which way round it goes.

A second example : the pitch angle of spiral arms (a measure of how tightly wound they are) in galaxies correlates with the mass of their supermassive black hole. This is completely unexpected because the central black holes, though massive, are minuscule in comparison to most spiral galaxies. Local gravity sources (e.g. ordinary stars) ought to dominate at large distances - there's no plausible direct connection between the black hole and something as large as a spiral arm. But there might, the authors suggest, be a common link through a third factor such as the density profile of the dark matter.
Charts like those in Spurious Correlations are a variety of what's known as p-hacking : plot everything against everything and see what sticks. Surprisingly tight correlations can occur by chance if you plot enough variables together : what you're not being shown are the many variables which have no correlation whatsoever. Simply put, if something has a million to one chance of happening, if you give it a million opportunities to happen then it probably will. Other unexpected relations can occur because of common underlying factors with no direct connection between the two plotted variables.
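The "million opportunities" point is just as easy to demonstrate : generate enough completely independent random variables and some pair of them will look impressively correlated (again, a toy example) :

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 20))   # 50 completely independent random 'variables', 20 points each

corr = np.corrcoef(data)           # all pairwise correlation coefficients
np.fill_diagonal(corr, 0.0)        # ignore each variable's correlation with itself
i, j = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
print(f"Best 'correlation' found purely by chance : r = {corr[i, j]:.2f} "
      f"between variables {i} and {j}")
```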
Last time I mentioned different procedures for measuring the size of a galaxy, and we saw that despite being objective they gave very different results. As with automatic galaxy-finding algorithms that produce catalogues of low reliability, the point is that an objective procedure is not the same as being objectively correct. We'll see an example of this shortly and much more in lecture 4.
Unknown unknowns
So the observations of baryons may be severely limiting our view of the Universe. The naive expectation that adding in the baryons can't change the distribution of satellite galaxies predicted in the simulations may be over-simplifying : we might be witnessing a selection effect. Though it must be said that the precise mechanism isn't at all obvious, it is at least conceivable that baryonic physics could limit which halos actually host visible galaxies.
Recently there have been some very interesting discoveries suggesting that that is indeed the case. While galaxies of especially low surface brightness have been known for ages, no-one thought they were numerically significant. That changed in 2015 with the discovery of 800 so-called ultra diffuse galaxies (UDGs) in the Coma cluster, galaxies which are about as large as the Milky Way but as much as a thousand times fainter.
UDGs have since been discovered in all kinds of environments, even in isolation. Most appear to be smooth and red but some are blue and structured, resembling standard LTGs but much fainter. Some are even known to have gas. At the moment, because UDGs are hard to identify, we can't say in which environment they're most common. More problematically, we can't quantify their typical dark matter content. If they're low mass, then UDGs at least alleviate (but do not solve) the missing satellite problem. But if they're massive, then they make things worse. It's of course possible that some are massive and some are not, but the important value is their typical mass, and that we just don't know at all.
If it disagrees with experiment, it's... annoying
You could be forgiven for thinking that there are enough problems with the standard model that we should just chuck it out and start again. There are indeed problems, but if we let every difficulty count as a falsification then every theory will have to be discarded. The point is that all our models have been over-simplifications, and without the full physics included we actually can't say if they're wrong or not : maybe they have fundamental problems, maybe they don't.
As mentioned, rare events happen by chance if given enough opportunities. We see this particularly in HI spectra, where the non-Gaussian nature of the noise means we sometimes see very convincing signals indeed that turn out to be spurious. My favourite example of all, though, is a simulation of these spectacular interacting galaxies :
A pretty good match - not perfect, but good. The problem is that this model of the galaxy's formation only included the disturbed galaxy and the elliptical, but subsequent observations found this :
A much larger third galaxy is clearly involved, but that wasn't included in the model. So the model has got the right answer - even in terms of quite fine structural details - by the wrong method ! Getting the right answer is a necessary but not sufficient condition for a good theory. The success of one model does not preclude the success of others.
Don't be hasty
We could turn to our models and say, "hmm, these all have problems, let's chuck them all out and start again", but this would be the wrong lesson to learn. A better lesson would be that if they have problems they need to be modified and improved : we must always be cautious. Only when we find a really deep flaw in the most fundamental nature of a model should we completely reject it.
While the standard dark matter model does have problems, it's also important to remember that it has tremendous successes as well. As well as reproducing the large-scale structure of the Universe, it also works extremely well at explaining colliding galaxy clusters. The Bullet Cluster and other cases show what happens after two clusters collide. Remember a cluster is a gravitationally bound structure containing its own dark matter, gas, and galaxies. If two clusters pass through each other, we'd expect the galaxies (bound by the collisionless dark matter of their parent clusters) to keep going, but the gas - which is collisional and fills a large volume - to get stuck in the middle. That's exactly what happens, and, importantly, gravitational lensing confirms that the dark matter does exactly what it's supposed to. It's very hard to make this work without dark matter.
It's possible that we will eventually find a flaw with the standard model so fundamental that it can't be saved. But it's also possible that there are other, more nuanced aspects of physics we don't understand that could explain the problems without such a drastic rejection. For example, we know the main processes driving galaxy evolution today were very different in the past. Star formation activity peaked around 10 Gyr ago, as did AGN activity, and merger rates have also been decreasing. In the early Universe there were no galaxy clusters, so the processes of tidal encounters and ram pressure stripping would have been very different. There were also population III stars, far more energetic than any we see today, ionising large parts of the Universe ("squelching").
There are two main theories as to how galaxies assembled themselves. One is the idea of monolithic collapse, that huge rotating gas clouds simply collapsed and went thwoooop to form a galaxy. Simulations show that this is very successful. The problem is that there's no evidence or reason to suppose that such monoliths ever existed. Physics instead points to the now-dominant paradigm of hierarchical merging, where galaxies assemble themselves through the cannibalistic merging of smaller galaxies. This has plenty of problems besides the missing satellite issue, but is a far more natural expectation based on our understanding of physics.
Not even wrong ?
I'll finish with some final statistical lessons that we'll need for the concluding post. As mentioned, objective procedures are not necessarily objectively correct. A really wonderful example of this is provided by the datasaurus project. In the gif below, at every frame of the animation the points have exactly the same mean and standard deviation in their positions !
What this means is that quantification can sometimes be of limited help. You can't quantify "dinosauriness" with a simple parameter like mean position : you have to look at the data yourself. You can, should, and indeed must make statistical quantifications for your analysis. But you also have to look at the damn data, because while using quantitative parameters is fine, relying on them exclusively is an absolutely dreadful idea.
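A quick way to convince yourself of this : two datasets with wildly different shapes but identical summary statistics (in the spirit of the datasaurus and Anscombe's quartet, though not the actual datasaurus data) :

```python
import numpy as np

def standardise(x, y):
    """Force both coordinates to have mean 0 and standard deviation 1."""
    return (x - x.mean()) / x.std(), (y - y.mean()) / y.std()

t = np.linspace(0, 2 * np.pi, 200)
circle = standardise(np.cos(t), np.sin(t))   # points on a ring
line = standardise(t, 3 * t + 1)             # points on a straight line

for name, (x, y) in [("circle", circle), ("line", line)]:
    print(name, x.mean().round(10), x.std().round(10), y.mean().round(10), y.std().round(10))
# Identical means and standard deviations - yet one is a ring and the other a line.
```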
This isn't an abstract mathematical idea either. The US air force suffered from the flaw of averages when they designed their fighter aircraft based on the average dimensions of their pilots, not realising that very few pilots indeed were close to the averages in all parameters : everyone really is unique just like everyone else. This meant a loss of planes and pilots because in a fighter plane it's really, really important you can reach the controls when you need to. The solution ? Instead of tailoring each plane to each pilot, they developed adjustable seats so pilots could set things for themselves no matter which plane they flew.
That's my philosophy of science rant over. The main take-home lessons are :
- Prefer simple explanations (but don't go thinking that simpler means it's more likely to be true, it's just easier to test)
- Objective procedures are not necessarily objectively correct - and indeed there are some things you just can't quantify at all
- Different models can be equally successful - always try and test multiple explanations, because the fact that one model works well doesn't mean that others are disproven
- The interpretation of what the data means is down to you and you alone. No algorithm can tell you what the data really means. Do not avoid statistical testing, but don't avoid subjective judgements either.
Friday, 10 November 2017
Criticism versus bullying
During my final lecture on galaxy evolution (focusing on current problems), I rubbished the idea of planes of satellite galaxies. I wasn't very nice about it either; I was downright sarcastic. I'm still not sure if this was the right thing to do. I did explicitly warn the students that I was biased and that they should consult the literature for themselves and, if at all possible, talk to other people. I provided links to the original publications and alternative viewpoints in the PowerPoint file. I gave my honest, sincerely held opinion (I really do think this field is complete nonsense, and honesty isn't always very pleasant), but I still worry I may have gone a bit too far.
I should point out that the researchers in this field are almost all much more senior than me and/or in more distinguished positions (their papers tend to have hundreds of citations; mine have typically < 20). I would not, however, under any circumstances question people's competence in a peer-reviewed journal. I'm honestly not really sure if I ever crossed the line from saying "this research is bunk" to at least implying "these researchers are stupid" (or that they have done a stupid thing).
I made a conscious decision beforehand. I could have tried to present the various viewpoints (I'm not the only one who isn't convinced by this, by any means) in a dispassionate way. There were several reasons I didn't do this. The first, main reason was that this would make for a deadly dull lecture, and I didn't want to inflict that on anyone - certainly not in a 90 minute lecture. The other reasons are of about equal importance : I'm rather peeved at the vehemence with which the claims about such objects have been expressed and the way certain people have used very weak evidence to claim that cosmology is all wrong, so I felt they deserved a bit of a kick in the complacency. The final, clinching argument occurred when I asked myself, "Can I stand in front of an audience, looking at this data set, and tell them this claim has any serious merit ?". I finally decided that I couldn't do this - it would have been a lie (more accurately, my opinion is that it has no serious merit). I would have felt like I was misleading people into believing there was a serious problem where in fact none exists. There's a line that has to be drawn somewhere : we don't teach students about the electric Universe or the Flat Earth model because they are utter garbage. This issue doesn't fall into that category by any means, but still, I just couldn't stand there and say, "this is credible".
Well, anyway, you'll get the transcript in a couple of weeks when I write it up, so you can judge for yourself.
https://www.forbes.com/sites/startswithabang/2017/11/10/professional-disagreement-over-galaxies-escalates-into-bullying-and-harassment/
Wednesday, 8 November 2017
Paper submitted !!!
Will No-One Rid Me Of This Turbulent Sphere ?
Most detected neutral atomic hydrogen HI is found in close association with optically bright galaxies. However, a handful of HI clouds are known which appear to be optically dark and have no nearby potential progenitor galaxies, making tidal debris an unlikely explanation. In particular, 6 clouds identified by the Arecibo Galaxy Environment Survey are interesting due to the combination of their small size, isolation, and especially their broad line widths. A recent suggestion is that these clouds exist in pressure equilibrium with the intracluster medium, with the line width arising from turbulent internal motions. Here we explore that possibility by using the FLASH code to perform a series of 3D hydro simulations. Our clouds are modelled using spherical Gaussian density profiles, which are embedded in a hot, low-density gas representing the intracluster medium. The simulations account for heating and cooling of the gas, and we explore the effects of varying the structure and strength of their internal motions. We create synthetic HI spectra, and find that none of our simulations reproduce the observed cloud parameters for longer than ~100 Myr : the clouds either collapse, disperse, or experience rapid heating which would cause ionisation and render them undetectable to HI surveys. While the turbulent motions required to explain the high line widths generate structures which appear to be inherently unstable, making this an unlikely explanation for the observed clouds, these simulations demonstrate the importance of including the intracluster medium in any model seeking to explain the existence of these objects.
Includes an acknowledgement to King Henry II via Robert Minchin.
Monday, 6 November 2017
Lecture 2/4 : Nothing Will Come Of Nothing
Second post from my lecture course on galaxy evolution. This one covers two major practical techniques : optical photometry (measuring brightness and other parameters) and HI spectroscopy (measuring the gas content from radio telescopes). For the full details see the complete transcript of the 90 minute lecture.
In this much shorter post I'll skip the gritty equations and just summarise the major methods, their uncertainties, and why the subjective aspect isn't as bad as you might think. This post is focused on methodology rather than galaxy evolutionary theory.
Optical photometry
There are several major parameters we can get very easily that can tell us some important information : brightness, colour, and size. There's no end to how difficult we can make these measurements, if we want to get sophisticated, so here's the most basic approach possible.
When we have an optical image of our galaxy, we can define an aperture around it using specialised software. This is much like drawing a polygon in standard drawing programs. Unlike ordinary jpeg or png images though, we can manipulate the displayed data range in more complex ways so as to reveal fainter features. This often involves a lot of interacting with the data, trying out different ranges to see what works best. When we think we can see the faintest emission, we draw our aperture.
You can see in the example there are also some boxes with dashed outlines. These are regions we have decided contain only background noise, while the circle with the red strikethrough is a masked region. When we get the software to make the measurement, it will sum up all the emission within the green ellipse, excluding anything present within the mask. Then it will estimate the average background value using the boxes and subtract this. We keep the boxes close to the target galaxy because the background level can vary in complex ways (in this example it's about as flat as it ever gets), so we need values similar to those that would be present at the location of the galaxy itself.
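In code, the basic measurement really is as simple as it sounds : sum the pixels inside the aperture and subtract the background estimated nearby. A bare-bones sketch on a synthetic image (a circular aperture and no mask, for brevity; real measurements would use something like the photutils package with elliptical apertures) :

```python
import numpy as np

# A fake 200x200 image : flat background plus noise plus a fuzzy 'galaxy' in the middle.
rng = np.random.default_rng(3)
y, x = np.mgrid[0:200, 0:200]
image = 10.0 + rng.normal(scale=1.0, size=(200, 200))
image += 500.0 * np.exp(-((x - 100)**2 + (y - 100)**2) / (2 * 15.0**2))

# Aperture : every pixel within 50 pixels of the galaxy centre.
in_aperture = (x - 100)**2 + (y - 100)**2 < 50**2

# Background : estimated from a box well away from the galaxy, then subtracted.
background_per_pixel = np.median(image[10:40, 10:40])
flux = np.sum(image[in_aperture]) - background_per_pixel * np.sum(in_aperture)
print(f"Background-subtracted flux : {flux:.0f}")
```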
From this simple procedure we can directly measure the apparent brightness of the galaxy and also its angular diameter. If we know the galaxy's distance, we can easily convert these to absolute magnitude (i.e. how much energy the galaxy is emitting, or its stellar mass) and physical size.
Unlike images from a smartphone, in astronomy we save the data from different wavelengths separately. The image above uses one wavelength range, but if we want to measure the galaxy's colour we have to repeat the procedure using another wavelength range. Colour is defined as the difference of the measurements in the two wavebands (usually only slightly different from each other).
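The conversions involved are standard textbook formulae; the example numbers below are made up for illustration :

```python
import numpy as np

m = 12.5              # apparent magnitude in some band
d_mpc = 17.0          # distance in Mpc
theta_arcmin = 5.0    # measured angular diameter

# Absolute magnitude : M = m - 5 log10(d / 10 pc)
M = m - 5 * np.log10(d_mpc * 1e6 / 10.0)
# Colour : simply the difference of two apparent magnitudes, e.g. g - i
colour = 13.1 - 12.5
# Physical size : small-angle approximation, distance times angle in radians
size_kpc = d_mpc * 1e3 * np.radians(theta_arcmin / 60.0)
print(f"M = {M:.2f}, colour = {colour:.2f}, diameter = {size_kpc:.1f} kpc")
```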
You might wonder if the subjective size of the aperture is a problem. For brightness this isn't really a big deal. As long as our data quality is high, then it doesn't really matter if we make the aperture too big : the sum of the noise beyond the edge of the galaxy will be close to zero. It's a lot more important when it comes to measuring size, but more on that shortly.
The above is an ideal case. In reality we often have to deal with bright foreground stars, clouds of dust in our own Galaxy, the difficulties of defining where interacting galaxies end, and instrumental effects that can make the noise look really weird. Some of these effects we can mitigate, but if they're too bad we have to admit defeat and discard those observations from our analysis.
We can also get the morphology of the galaxy by inspection of the images. Sometimes a classification will already be available in existing catalogues. If not, we have to decide for ourselves. This is really just a judgement call. We can check if the galaxy has specific features but there's no (good) automatic solution to this.
The photometric measurements of a single galaxy are usually incredibly boring. But if we have a large sample we can already start to do very useful science. For example, we can construct a luminosity function, which shows the distribution of galaxies of given (absolute) luminosities.
This shows three examples of a particular function (called a Schechter function, which is often used as it's a good fit to real data), each with a slightly different slope at the faint end (α). In all cases, there are clearly far more galaxies of low luminosity than there are bright galaxies. Indeed, above a certain threshold the number drops very rapidly. The slope of the linear, faint-end part is controversial. Cosmological simulations have predicted it should be a lot steeper than is observed : that is, there should be more faint galaxies than we actually detect. This boring-looking graph reveals one of the biggest problems in modern cosmology !
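For reference, the Schechter function usually takes the form (standard convention; normalisations vary from paper to paper) :

```latex
\phi(L)\,\mathrm{d}L \;=\; \phi^{*} \left(\frac{L}{L^{*}}\right)^{\alpha}
\exp\!\left(-\frac{L}{L^{*}}\right) \frac{\mathrm{d}L}{L^{*}}
```

Here L* marks the luminosity above which the counts cut off exponentially, φ* sets the overall normalisation, and α is the faint-end slope discussed above.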
We can also plot a colour-magnitude diagram like we saw last time. The key, take-home message from this post is that it's not so much about what data we have, it's all about what comparisons we make. Having a bunch of data, even really good data, means diddly-squat by itself. But divide the sample cleverly and trends can be revealed that can tell us what galaxies are up to.
This example shows galaxies in the Virgo cluster. LTGs are blue squares whereas ETGs are small triangles. LTGs which are strongly deficient in gas (more on this later) are green blobs. This paints quite a convincing picture of gas loss driving morphological evolution, where the loss of gas from an LTG prevents further star formation and eventually transforms it into an ETG, slowly moving it up the colour-magnitude diagram.
The by-eye methods described are not the only ones available. It is possible to be much more objective, it's just more difficult. The aperture photometry gives us the total amount of light coming from a galaxy, but if we divide it into smaller regions we can construct a surface brightness profile. This shows how the amount of light emitted per unit area varies radially throughout the galaxy.
Different types of galaxy have different shapes of profile, so this is one way to supplement our subjective judgement on morphology with something more rigorous and quantifiable. Surface brightness profiles sound like a fantastic improvement on aperture photometry, but they have limitations besides being harder to construct. They are impractical for large galaxy samples, still involve some amount of subjective decision-making, and there are many galaxies for which they simply can't be done : those which are very small or have weird morphologies, for example. So aperture photometry still has an important role to play.
The shape of the surface brightness profiles hints at the final issue for optical data - deciding on a measurement of the galaxy's size. There are different conventions adopted, not all of which are appropriate for every galaxy. We could measure the size at some fixed surface brightness level (the isophotal radius, e.g. the Holmberg radius), or we could find some particular part of the profile (e.g. where it begins to drop rapidly), or extrapolate it further based on the best-fit function. One popular measure is the half-light radius, also known as the effective radius. This is the radius enclosing half of the galaxy's light. Because the profile can be very much steeper in the centre, the effective radius can be much smaller than the isophotal radius. For example the effective radius of the Milky Way is estimated at around 3.6 kpc, whereas the isophotal value is more like 15 kpc.
Measuring the atomic gas
Virtually all problems in extragalactic science revolve around star formation. We know that gaseous clouds can collapse under their own gravity, but almost all the details of this are controversial. The rate of collapse is affected by the chemistry of the gas, which determines how quickly it radiates away heat that would otherwise exert an outward pressure. Chemistry is affected by star formation, with massive stars converting light elements into heavier ones. And the gas itself can be in different phases : hot, ionised gas (with temperatures > 1 million Kelvin and emitting X-rays), warm neutral atomic gas (at 10,000 K) and cold molecular gas (typically, say, ~500 K).
Atomic hydrogen gas (HI, pronounced H one) is sometimes described as the reservoir of fuel for star formation, though the current thinking is that the gas has to cool to molecular densities before it can form stars. The atomic gas is probably only involved more indirectly, though it still has a role to play. But observationally, the atomic gas has a number of advantages : it's easier to measure and usually more massive and extended compared to the molecular gas.
In some cases we can map the HI directly (shown in red in the above figure), just like making an optical map. Unlike the optical images we can also directly measure the line of sight velocity of the gas, allowing us to construct a rotation curve. This shows how fast the galaxy is rotating at different points. The disagreement between the theoretical prediction, based on the mass of the visible stars and gas, and the observed shape of the curve is one of the most important discoveries in radio astronomy, and is one of the main reasons we infer the presence of dark matter.
Here, the central velocity (about 1,000 km/s) indicates the systemic velocity of the galaxy. We can use Hubble's Law to estimate its distance based on this. We see that emission is detected over the velocity range ~800 - 1,200 km/s, giving us an indication of the rotation speed (it would be half this line width, after we correct for the galaxy's orientation with respect to us).
This examples shows a very distinct double-horn (a.k.a. "Batman") profile, typical of spiral galaxies (deviations from this can indicate the galaxy might be interacting with something). If you look back to the rotation curve figure, you see it's mainly flat, with most of the gas moving at a single velocity relative to us. Of course, since the galaxy is rotating, about half the gas is moving away from us and half is moving towards us. Hence we get these two brighter "horns" representing the majority of gas at this single rotation velocity on different sides of the galaxy respective to us.
The total area under the curve can be measured, and that gives us the total HI flux. It's not that dissimilar to the optical measurements : we choose which part of the spectrum to measure, and the software does the integration and background subtraction for us. And once we've got the flux, we can convert it into HI mass if we know the distance.
As mentioned, the line width has to be corrected for the viewing angle to get the rotation. If we assume the galaxy is actually circular it becomes easy to calculate this - we just have to measure its major and minor axial diameter from the optical data.
Finding gas
Modern HI surveys typically map large areas of the sky, but at low resolution. Somehow we have to turn data like this :
https://www.youtube.com/watch?time_continue=1&=&v=XmpGTGkaFg4
... into a nice catalogue of galaxies. The animation shows a series of slices through a data set. Each elongated bright blob is the HI component of a galaxy.
I won't cover the extraction procedures in detail, but it boils down to someone going through frame by frame (or rather, channel by channel) and deciding where they think the HI is present. This is obviously imperfect. The observer might find something that looks real but isn't, or miss real sources. Fortunately, afterwards we can do simple follow-up observations to quickly confirm or disprove the candidate sources in a nice objective way. And this leads to two key parameters of our HI catalogue :
Completeness is defined as the fraction of real sources present that are in your catalogue. If you should somehow find all of them then your catalogue is 100% complete.
Reliability is defined as the fraction of sources in your catalogue which are real. If all your sources are real then your catalogue is 100% reliable, but it would be very unlikely to also be 100% complete.
Why reliability is easy to measure through repeat observations, completeness is much, much more difficult - but more on that next time. Knowing these parameters can be useful in everyday life : think about what someone really means if they say a procedure is 90% reliable !
Putting it all together
With all this optical and radio data, and knowing some basic statistical parameters to check, we can do quite a lot. We have stellar mas, colour, size, morphology, gas mass, and rotation speed. We can also get HI deficiency alluded to earlier. Studies on large numbers of galaxies have revealed that in the field, knowing the size and morphology of a galaxy enables a fairly accurate prediction of its HI mass. By comparing the actual mass of a galaxy with its prediction, we can work out if it has more or less gas than expected. Typically, galaxies in clusters have significantly less gas - by a factor 10 or more - than comparable field galaxies. This deficiency measurement gives us another parameter we can use to divide up our sample.
We'd also like to have the total dynamical mass of a galaxy. The rotation curves indicate how much dark matter is present, but we can also get a crude estimate of this from the line width. For this we need to know the radius of the HI, which we can't measure directly. Fortunately all those field galaxy studies have found that the HI is typically extended by around a factor of 1.7 compared to the optical size of a galaxy, so that gives us something reasonable to work with. Not perfect, but decent.
Let's finish off with two take-home messages :
In this much shorter post I'll skip the gritty equations and just summarise the major methods, their uncertainties, and why the subjective aspect isn't as bad as you might think. This post is focused on methodology rather than galaxy evolutionary theory.
Optical photometry
There are several major parameters we can get very easily that can tell us some important information : brightness, colour, and size. There's no end to how difficult we can make these measurements, if we want to get sophisticated, so here's the most basic approach possible.
When we have an optical image of our galaxy, we can define an aperture around it using specialised software. This is much like drawing a polygon in a standard drawing program. Unlike ordinary jpeg or png images though, we can manipulate the displayed data range in more complex ways so as to reveal fainter features. This often involves a lot of interacting with the data, trying out different ranges to see what works best. When we think we can see the faintest emission, we draw our aperture.
Aperture photometry using the popular ds9 software.
You can see in the example there are also some boxes with dashed outlines. These are regions we've decided contain only background noise, while the circle with the red strikethrough is a masked region. When we get the software to make the measurement, it sums up all the emission within the green ellipse, excluding anything inside the mask. Then it estimates the average background value from the boxes and subtracts this. We keep the boxes close to the target galaxy because the background level can vary in complex ways (in this example it's about as flat as it ever gets), so we want to sample values similar to those at the location of the galaxy itself.
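To make the bookkeeping concrete, here's a minimal sketch of the arithmetic the software is doing for us. The array names, the elliptical aperture and the background boxes are all made up for illustration - real tools handle the geometry, weighting and error estimates far more carefully.

```python
import numpy as np

def aperture_flux(image, in_aperture, in_mask, background_boxes):
    """Very simplified aperture photometry.

    image            : 2D array of pixel values
    in_aperture      : boolean array, True inside the source aperture
    in_mask          : boolean array, True for pixels to exclude (e.g. a star)
    background_boxes : boolean array, True inside the background boxes
    """
    # Mean sky level per pixel, estimated from the nearby background boxes
    sky_per_pixel = image[background_boxes].mean()

    # Sum everything inside the aperture except the masked pixels,
    # then subtract the sky contribution for that many pixels
    use = in_aperture & ~in_mask
    return image[use].sum() - sky_per_pixel * use.sum()
```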
From this simple procedure we can directly measure the apparent brightness of the galaxy and also its angular diameter. If we know the galaxy's distance, we can easily convert these to absolute magnitude (i.e. how much energy the galaxy is emitting, from which a stellar mass can be estimated) and physical size.
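For reference, the conversions are just the standard distance modulus and small-angle formulae. A quick sketch (the numbers plugged in at the bottom are purely illustrative) :

```python
import numpy as np

def absolute_magnitude(apparent_mag, distance_mpc):
    # Distance modulus : M = m - 5 log10(d / 10 pc)
    distance_pc = distance_mpc * 1.0e6
    return apparent_mag - 5.0 * np.log10(distance_pc / 10.0)

def physical_size_kpc(angular_diameter_arcsec, distance_mpc):
    # Small-angle approximation : size = distance * angle (in radians)
    angle_rad = np.deg2rad(angular_diameter_arcsec / 3600.0)
    return distance_mpc * 1000.0 * angle_rad   # Mpc -> kpc

# Illustrative values only
print(absolute_magnitude(14.0, 17.0))     # a galaxy at roughly Virgo distance
print(physical_size_kpc(300.0, 17.0))     # a 5 arcmin galaxy at 17 Mpc
```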
Unlike images from a smartphone, in astronomy we save the data from different wavelengths separately. The image above uses one wavelength range, but if we want to measure the galaxy's colour we have to repeat the procedure using another wavelength range. Colour is defined as the difference between the magnitudes measured in the two wavebands (which are usually fairly close to each other in wavelength).
You might wonder if the subjective size of the aperture is a problem. For brightness this isn't really a big deal. As long as our data quality is high, it doesn't really matter if we make the aperture too big : the sum of the noise beyond the edge of the galaxy will be close to zero. It's a lot more important when it comes to measuring size, but more on that shortly.
The above is an ideal case. In reality we often have to deal with bright foreground stars, clouds of dust in our own Galaxy, the difficulties of defining where interacting galaxies end, and instrumental effects that can make the noise look really weird. Some of these effects we can mitigate, but if they're too bad we have to admit defeat and discard those observations from our analysis.
Example of a problematic foreground star.
The same patch of sky viewed through two filters, one of which causes horrendous fringing.
We can also get the morphology of the galaxy by inspection of the images. Sometimes a classification will already be available in existing catalogues. If not, we have to decide for ourselves. This is really just a judgement call. We can check if the galaxy has specific features but there's no (good) automatic solution to this.
The photometric measurements of a single galaxy are usually incredibly boring. But if we have a large sample we can already start to do very useful science. For example, we can construct a luminosity function, which shows the distribution of galaxies of given (absolute) luminosities.
Distribution functions are a lot like histograms, except with many bins.
This shows three examples of a particular function (called a Schechter function, which is often used as it's a good fit to real data), each with a slightly different slope at the faint end (α). In all cases, there are clearly far more galaxies of low luminosity than there are bright galaxies. Indeed, above a certain threshold the number drops very rapidly. The slope of the linear, faint-end part is controversial. Cosmological simulations have predicted it should be a lot steeper than is observed : that is, there should be more faint galaxies than we actually detect. This boring-looking graph reveals one of the biggest problems in modern cosmology !
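For the curious, the Schechter function itself is simple enough to write down : a power law at the faint end with an exponential cut-off above a characteristic luminosity. The parameter values below are just typical-looking numbers for illustration, not a fit to any particular survey.

```python
import numpy as np

def schechter(L, phi_star, L_star, alpha):
    """Schechter luminosity function :
    phi(L) dL = phi_star * (L/L_star)**alpha * exp(-L/L_star) * dL/L_star
    """
    x = L / L_star
    return (phi_star / L_star) * x**alpha * np.exp(-x)

# Illustrative parameters : normalisation, knee luminosity (solar units)
# and three different faint-end slopes
L = np.logspace(7, 12, 100)
for alpha in (-1.1, -1.3, -1.5):
    phi = schechter(L, phi_star=5e-3, L_star=1e10, alpha=alpha)
    # Ratio of the number density of the faintest to the brightest bin :
    print(alpha, phi[0] / phi[-1])
```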
We can also plot a colour-magnitude diagram like we saw last time. The key, take-home message from this post is that it's not so much about what data we have, it's all about what comparisons we make. Having a bunch of data, even really good data, means diddly-squat by itself. But divide the sample cleverly and trends can be revealed that can tell us what galaxies are up to.
This example shows galaxies in the Virgo cluster. LTGs are blue squares whereas ETGs are small triangles. LTGs which are strongly deficient in gas (more later) are green blobs. This paints quite a convincing picture of gas loss driving morphological evolution, where the loss of gas from an LTG prevents further star formation and eventually transforms it into an ETG, slowly moving up the colour-magnitude diagram.
The by-eye methods described are not the only ones available. It is possible to be much more objective, it's just more difficult. The aperture photometry gives us the total amount of light coming from a galaxy, but if we divide it into smaller regions we can construct a surface brightness profile. This shows how the amount of light emitted per unit area varies radially throughout the galaxy.
Examples of typical surface brightness profiles.
Different types of galaxy have different shapes of profile, so this is one way to supplement our subjective judgement on morphology with something more rigorous and quantifiable. Surface brightness profiles sound like a fantastic improvement on aperture photometry, but they have limitations besides being harder to construct. They are impractical for large galaxy samples, still involve some amount of subjective decision-making, and there are many galaxies for which they simply can't be done : those which are very small or have weird morphologies, for example. So aperture photometry still has an important role to play.
The shape of the surface brightness profiles hints at the final issue for optical data - deciding on a measurement of the galaxy's size. There are different conventions adopted, not all of which are appropriate for every galaxy. We could measure the size at some fixed surface brightness level (the isophotal radius, e.g. the Holmberg radius), or we could find some particular part of the profile (e.g. where it begins to drop rapidly), or extrapolate it further based on the best-fit function. One popular measure is the half-light radius, also known as the effective radius. This is the radius enclosing half of the galaxy's light. Because the profile can be very much steeper in the centre, the effective radius can be much smaller than the isophotal radius. For example the effective radius of the Milky Way is estimated at around 3.6 kpc, whereas the isophotal value is more like 15 kpc.
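As an illustration of the half-light idea, here's a rough sketch of how you might extract an effective radius from a measured surface brightness profile. It assumes you already have the profile as arrays of radius and intensity; the trapezoidal integration and linear interpolation are just the simplest choices, not what any particular pipeline does.

```python
import numpy as np

def effective_radius(r, intensity):
    """Half-light radius from a surface brightness profile.

    r         : radii of the profile points (e.g. in kpc), increasing
    intensity : surface brightness at each radius (linear units, not mag)
    """
    # Light enclosed within each radius : L(<r) = 2 pi * integral of I(r') r' dr'
    integrand = 2.0 * np.pi * intensity * r
    enclosed = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))

    # Radius at which half the total light is enclosed (linear interpolation)
    return np.interp(0.5 * enclosed[-1], enclosed, r)

# Toy example : an exponential disc with scale length 3 kpc
r = np.linspace(0.0, 30.0, 300)
print(effective_radius(r, np.exp(-r / 3.0)))   # ~1.68 scale lengths, so ~5 kpc
```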
Measuring the atomic gas
Virtually all problems in extragalactic science revolve around star formation. We know that gaseous clouds can collapse under their own gravity, but almost all the details of this are controversial. The rate of collapse is affected by the chemistry of the gas, which determines how quickly it radiates away heat that would otherwise exert an outward pressure. Chemistry is affected by star formation, with massive stars converting light elements into heavier ones. And the gas itself can be in different phases : hot, ionised gas (with temperatures > 1 million Kelvin and emitting X-rays), warm neutral atomic gas (at around 10,000 K) and cold molecular gas (typically only tens of Kelvin).
Atomic hydrogen gas (HI, pronounced H one) is sometimes described as the reservoir of fuel for star formation, though the current thinking is that the gas has to cool and condense into the molecular phase before it can form stars. The atomic gas is probably only involved more indirectly, though it still has a role to play. But observationally, the atomic gas has a number of advantages : it's easier to measure and usually more massive and more extended than the molecular gas.
NGC 628 from the optical SDSS and the HI survey THINGS.
In some cases we can map the HI directly (shown in red in the above figure), just like making an optical map. Unlike the optical images we can also directly measure the line of sight velocity of the gas, allowing us to construct a rotation curve. This shows how fast the galaxy is rotating at different points. The disagreement between the theoretical prediction, based on the mass of the visible stars and gas, and the observed shape of the curve is one of the most important discoveries in radio astronomy, and is one of the main reasons we infer the presence of dark matter.
A classic HI spectral profile.
Here, the central velocity (about 1,000 km/s) indicates the systemic velocity of the galaxy. We can use Hubble's Law to estimate its distance based on this. We see that emission is detected over the velocity range ~800 - 1,200 km/s, giving us an indication of the rotation speed (it would be half this line width, after we correct for the galaxy's orientation with respect to us).
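In code, that back-of-the-envelope estimate looks something like this (H0 is taken as a round 70 km/s/Mpc here purely for illustration, and the inclination correction is dealt with a little further down) :

```python
H0 = 70.0                     # assumed Hubble constant, km/s/Mpc

v_sys = 1000.0                # systemic velocity from the profile centre, km/s
line_width = 1200.0 - 800.0   # full width of the detected emission, km/s

distance_mpc = v_sys / H0             # Hubble's Law : d = v / H0
v_rot_uncorrected = line_width / 2.0  # still needs the inclination correction

print(distance_mpc, v_rot_uncorrected)   # ~14 Mpc and 200 km/s
```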
This example shows a very distinct double-horn (a.k.a. "Batman") profile, typical of spiral galaxies (deviations from this can indicate the galaxy might be interacting with something). If you look back to the rotation curve figure, you see it's mainly flat, with most of the gas moving at roughly a single rotation speed. Of course, since the galaxy is rotating, about half the gas is moving away from us and half is moving towards us. Hence we get these two brighter "horns", representing the majority of the gas at this single rotation speed on opposite sides of the galaxy relative to us.
The total area under the curve can be measured, and that gives us the total HI flux. It's not that dissimilar to the optical measurements : we choose which part of the spectrum to measure, and the software does the integration and background subtraction for us. And once we've got the flux, we can convert it into HI mass if we know the distance.
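The flux-to-mass conversion is the standard one for optically thin HI : M_HI = 2.356e5 * d^2 * S, with the distance d in Mpc and the integrated flux S in Jy km/s, giving the mass in solar masses. As a sketch :

```python
def hi_mass_msun(total_flux_jy_kms, distance_mpc):
    # Standard optically thin HI relation :
    # M_HI [Msun] = 2.356e5 * d[Mpc]**2 * S[Jy km/s]
    return 2.356e5 * distance_mpc**2 * total_flux_jy_kms

print(hi_mass_msun(10.0, 14.0))   # ~4.6e8 Msun for these illustrative numbers
```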
As mentioned, the line width has to be corrected for the viewing angle to get the rotation speed. If we assume the galaxy's disc is intrinsically circular this becomes easy to calculate - we just have to measure its major and minor axis diameters from the optical data.
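In its simplest form (treating the disc as infinitely thin and intrinsically circular) the inclination follows directly from the observed axial ratio, cos i = b/a, and the rotation speed is half the line width divided by sin i. A sketch, with the commonly used finite-thickness refinement left as a comment :

```python
import numpy as np

def rotation_velocity(line_width_kms, major_axis, minor_axis):
    """Correct a line width for inclination, assuming a thin circular disc."""
    # Thin-disc inclination : cos(i) = b/a
    cos_i = minor_axis / major_axis
    # (A common refinement assumes a finite intrinsic thickness q0 ~ 0.2 :
    #  cos^2(i) = ((b/a)**2 - q0**2) / (1 - q0**2) )
    sin_i = np.sqrt(1.0 - cos_i**2)
    return 0.5 * line_width_kms / sin_i

print(rotation_velocity(400.0, major_axis=4.0, minor_axis=2.0))  # ~231 km/s
```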
Finding gas
Modern HI surveys typically map large areas of the sky, but at low resolution. Somehow we have to turn data like this :
https://www.youtube.com/watch?time_continue=1&=&v=XmpGTGkaFg4
... into a nice catalogue of galaxies. The animation shows a series of slices through a data set. Each elongated bright blob is the HI component of a galaxy.
I won't cover the extraction procedures in detail, but it boils down to someone going through frame by frame (or rather, channel by channel) and deciding where they think the HI is present. This is obviously imperfect. The observer might find something that looks real but isn't, or miss real sources. Fortunately, afterwards we can do simple follow-up observations to quickly confirm or disprove the candidate sources in a nice objective way. And this leads to two key parameters of our HI catalogue :
Completeness is defined as the fraction of real sources present that are in your catalogue. If you should somehow find all of them then your catalogue is 100% complete.
Reliability is defined as the fraction of sources in your catalogue which are real. If all your sources are real then your catalogue is 100% reliable, but it would be very unlikely to also be 100% complete.
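If the follow-up observations tell us which catalogue entries are real, and a deeper survey (or a simulation) tells us how many real sources were missed, both numbers reduce to simple ratios. A sketch with hypothetical counts :

```python
def completeness(n_real_found, n_real_missed):
    # Fraction of all real sources that made it into the catalogue
    return n_real_found / (n_real_found + n_real_missed)

def reliability(n_real_found, n_false_detections):
    # Fraction of catalogue entries that turned out to be real
    return n_real_found / (n_real_found + n_false_detections)

# Hypothetical numbers : 90 confirmed sources, 15 spurious, 30 missed
print(completeness(90, 30))   # 0.75
print(reliability(90, 15))    # ~0.86
```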
While reliability is easy to measure through repeat observations, completeness is much, much more difficult - but more on that next time. Knowing these parameters can be useful in everyday life : think about what someone really means if they say a procedure is 90% reliable !
Putting it all together
With all this optical and radio data, and knowing some basic statistical parameters to check, we can do quite a lot. We have stellar mass, colour, size, morphology, gas mass, and rotation speed. We can also get the HI deficiency alluded to earlier. Studies of large numbers of galaxies have revealed that, in the field, knowing the size and morphology of a galaxy enables a fairly accurate prediction of its HI mass. By comparing the actual gas mass of a galaxy with this prediction, we can work out if it has more or less gas than expected. Typically, galaxies in clusters have significantly less gas - by a factor of 10 or more - than comparable field galaxies. This deficiency measurement gives us another parameter we can use to divide up our sample.
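Deficiency is usually quoted logarithmically, as the difference between the expected and observed HI masses. The scaling coefficients below are placeholders standing in for whatever field-galaxy calibration you adopt (in practice they depend on morphological type), so treat this purely as a sketch of the bookkeeping :

```python
import numpy as np

def hi_deficiency(observed_hi_mass, optical_diameter_kpc, a=7.0, b=0.9):
    """HI deficiency = log10(expected M_HI) - log10(observed M_HI).

    The expected mass comes from a field-galaxy scaling relation of the form
    log10(M_HI_exp) = a + b * log10(diameter); a and b here are placeholder
    values, not a real calibration."""
    log_expected = a + b * np.log10(optical_diameter_kpc)
    return log_expected - np.log10(observed_hi_mass)

# A deficiency of ~1 means ten times less gas than a comparable field galaxy
print(hi_deficiency(1.0e7, 20.0))   # ~ +1.2 with these placeholder numbers
```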
We'd also like to have the total dynamical mass of a galaxy. The rotation curves indicate how much dark matter is present, but we can also get a crude estimate of this from the line width. For this we need to know the radius of the HI, which we can't measure directly. Fortunately all those field galaxy studies have found that the HI is typically extended by around a factor of 1.7 compared to the optical size of a galaxy, so that gives us something reasonable to work with. Not perfect, but decent.
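The crude dynamical mass estimate is then just the usual M = V^2 R / G for material orbiting at speed V at radius R, with R taken to be ~1.7 times the optical radius. A sketch :

```python
G = 4.301e-6   # gravitational constant in kpc * (km/s)^2 / Msun

def dynamical_mass_msun(v_rot_kms, optical_radius_kpc, hi_extent_factor=1.7):
    # Assume the HI reaches ~1.7x the optical radius and the orbits are circular
    r_hi = hi_extent_factor * optical_radius_kpc
    return v_rot_kms**2 * r_hi / G

print(dynamical_mass_msun(200.0, 15.0))   # ~2.4e11 Msun for these numbers
```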
Let's finish off with two take-home messages :
- Comparisons are king. It's nice to have really good data, but far more important to have a good sample and clever ways to divide it. Sometimes we can see nice dramatic examples of galaxies undergoing change, but to work out whether this is important overall we need good statistical data.
- The procedures we use are often messy. There are few black-and-white cases where we can be truly certain about our precision, and some where objective measurements are actually inferior to subjective judgements. But this subjectivity is limited : you can't really accidentally measure colours so badly that you don't see a red or blue sequence.
In the next two posts we'll look at the importance of statistics and subjective/objective measurements in a lot more detail.