Valid criticisms, I think. I suppose the intention is that peer review will be sufficient to weed out the crazies* (maybe the reviewers' names should be made public ?). I'd like to see what happens : maybe it will degenerate into farce, maybe it will produce something interesting. The journal itself is the experiment...
* Cough cough TIME TRAVELLING ALIEN OCTOPUS cough cough cough...
There is one major problem here though :
For the reader of a paper, attaching authors to papers is important to help them decide how seriously to take the results. Here the difference between anonymous and pseudonymous authorship becomes important: if an author uses the same pseudonym over a period of time, the academic community can begin to get a sense of how good their work is (consider the Bourbaki pseudonym, which has been in use long enough to get a track-record), but if a publication is anonymous, the audience must rely solely on the credibility of the publishing journal and its editors.
What about the content itself ? Judging the content by the author is something we'd do well to avoid. Maybe all papers should be anonymous for six months after publishing, or something. I dunno. Anyway, I'm curious to see what happens with this.
The Controversial Journal of Controversial Ideas
"The Journal of Controversial Ideas ...proposes to allow academics to publish papers on controversial topics under a pseudonym. The hope is that this will allow researchers to write freely on controversial topics without the danger of social disapproval or threats. Thus the journal removes the author’s motivations, conflicts of interests and worldview from the presentation of a potentially controversial idea. This proposal heralds the death of the academic author – and, unlike Barthes, we think believe this is a bad thing."
"Defenders of The Journal of Controversial Ideas see it as a forum for true academic freedom. While academic freedom is important, it is not an unlimited right. Freedom without responsibility is recklessness. It is a lack of regard for the danger or consequences of one’s ideas. True academic freedom does not mean that writers get to choose when to avoid controversy. The pseudonymous authorship proposal allows authors to manipulate the credit and blame systems of the academy in the name of academic freedom."
"When it is working well, academic inquiry is a conversation. Researchers make claims and counterclaims, exchange reasons, and work together to open up new fields of inquiry. A conversation needs speakers: we need to keep track of who is talking, what they have said before, and who they are talking to. Pseudonymous authorship is an opt-out from the conversation, and the academic community will be worse off if its members no longer want to engage in intellectual conversation."
http://theconversation.com/the-journal-of-controversial-ideas-its-academic-freedom-without-responsibility-and-thats-recklessness-107106?utm_medium=Social&utm_source=Facebook#Echobox=1542706990
Thursday, 22 November 2018
Tuesday, 20 November 2018
Risky research means failure is always an option
In astronomy we often have to do repeat observations of potential detections to confirm they're real. A good confirmation rate is about 50%. Much less than this and we'd be wasting telescope time, and we'd start to worry that some of the sources we thought were real might not be so secure. Conversely, a much higher fraction would also be a waste of time, and would imply that we hadn't been as careful in our search as we thought - there'd still be other interesting things hidden in the data that we hadn't seen.
I suggest that this is also true to some extent in psychology. There seems to be a science-wide call for more risky, controversial research. Well, risky, controversial research requires a certain failure rate : if every finding was replicated, that would suggest the research wasn't risky enough; if none of them were, that would imply lousy research practices. The actual replication rate turns out to be, by happy coincidence, about 50%.
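To unpack that 50% a bit (this is my own toy framing, not anything from the article), here's a minimal sketch of how the expected replication rate depends on what fraction of the original hypotheses were actually true, assuming round illustrative numbers for statistical power and the false-positive rate :

```python
# Toy two-bin model : findings are either true (replicate with probability
# 'power') or false (replicate with probability 'alpha'). All numbers are
# illustrative assumptions, not taken from Many Labs 2.
def expected_replication_rate(frac_true, power=0.9, alpha=0.05):
    """Fraction of findings expected to replicate under a simple two-bin model."""
    return frac_true * power + (1.0 - frac_true) * alpha

for frac_true in [0.2, 0.5, 0.8]:
    rate = expected_replication_rate(frac_true)
    print(f"{frac_true:.0%} of hypotheses true  ->  ~{rate:.0%} replication rate")
# A ~50% replication rate is roughly what you'd expect if about half the
# original ideas were genuinely true - i.e. if the field really was taking risks.
```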
But likewise, in astronomy we don't write a paper in which we consider sources we haven't confirmed yet (or at least it's a very bad idea to do so). We wait until we've got those repeat observations before drawing any conclusions. Risky, preliminary pilot studies ought to have a failure rate by definition, otherwise they wouldn't be risky at all. The big "end-result" studies, on the other hand - the ones that are actually used to draw secure conclusions and, in the case of psychology, influence social policy - are the ones whose basic results you'd want on a secure footing.
The Many Labs 2 project was specifically designed to address these criticisms. With 15,305 participants in total, the new experiments had, on average, 60 times as many volunteers as the studies they were attempting to replicate. The researchers involved worked with the scientists behind the original studies to vet and check every detail of the experiments beforehand. And they repeated those experiments many times over, with volunteers from 36 different countries, to see if the studies would replicate in some cultures and contexts but not others.
Despite the large sample sizes and the blessings of the original teams, the team failed to replicate half of the studies it focused on. It couldn’t, for example, show that people subconsciously exposed to the concept of heat were more likely to believe in global warming, or that moral transgressions create a need for physical cleanliness in the style of Lady Macbeth, or that people who grow up with more siblings are more altruistic. And as in previous big projects, online bettors were surprisingly good at predicting beforehand which studies would ultimately replicate. Somehow, they could intuit which studies were reliable.
Maybe anecdotes are evidence, after all... :P
Many Labs 2 “was explicitly designed to examine how much effects varied from place to place, from culture to culture,” says Katie Corker, the chair of the Society for the Improvement of Psychological Science. “And here’s the surprising result: The results do not show much variability at all.” If one of the participating teams successfully replicated a study, others did, too. If a study failed to replicate, it tended to fail everywhere.
Many researchers have noted that volunteers from Western, educated, industrialized, rich, and democratic countries—weird nations—are an unusual slice of humanity who think differently than those from other parts of the world. In the majority of the Many Labs 2 experiments, the team found very few differences between weird volunteers and those from other countries. But Miyamoto notes that its analysis was a little crude—in considering “non-weird countries” together, it’s lumping together people from cultures as diverse as Mexico, Japan, and South Africa. “Cross-cultural research,” she writes, “must be informed with thorough analyses of each and all of the cultural contexts involved.”
Sanjay Srivastava from the University of Oregon says the lack of variation in Many Labs 2 is actually a positive thing. Sure, it suggests that the large number of failed replications really might be due to sloppy science. But it also hints that the fundamental business of psychology—creating careful lab experiments to study the tricky, slippery, complicated world of the human mind—works pretty well. “Outside the lab, real-world phenomena can and probably do vary by context,” he says. “But within our carefully designed studies and experiments, the results are not chaotic or unpredictable. That means we can do valid social-science research.”
https://www.theatlantic.com/science/archive/2018/11/psychologys-replication-crisis-real/576223/
Thursday, 15 November 2018
It's not what you know, it's who you know
Does being on a telescope time allocation committee get you a better chance of being awarded observing time ? Yes, says Jane Greaves of Cardiff University (who I do not know) - it boosts your chances by a factor of three. And this doesn't seem to be because being on the TAC gives you better knowledge of how to write a good proposal, because when people stop serving on the TAC, their success rate drops right back down again. They probably don't submit a massively higher number of proposals either, since this is usually a very time-consuming procedure. Could they be motivated to write the best possible proposals while on the TAC but not care so much afterwards ? I guess, but it doesn't seem likely.
The obvious and most likely inference is that TACs are biased towards serving members. Someone should give a sample of proposals to external reviewers and compare their scores with those of the TAC.
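If anyone did run that comparison, the statistics would be simple enough. Here's a minimal sketch of how you might test whether TAC members are favoured, using a standard Fisher exact test - the accept/reject counts below are invented purely for illustration and are not from the paper :

```python
# Compare proposal success rates for TAC members vs non-members.
# Hypothetical counts, for illustration only.
from scipy.stats import fisher_exact

# [accepted, rejected]
tac_members = [30, 20]    # 60% success while serving on the TAC (made up)
non_members = [40, 160]   # 20% success for everyone else (made up)

odds_ratio, p_value = fisher_exact([tac_members, non_members])
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")
# A large odds ratio with a small p-value would support the bias interpretation;
# the same test could be run on scores given by external reviewers.
```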
https://arxiv.org/pdf/1811.05790.pdf
Tuesday, 13 November 2018
A Gigantic Stealthy Dwarf With Lazy Stars
This really needs a press release when it's accepted for publication.
Astronomers love three-word acronyms, preferably containing the word "ultra" because it makes us feel ultra-important. Also we're hugely unimaginative at naming things, as the Very Large Array testifies. Anyway, while I'm especially interested in Ultra Diffuse Galaxies - big, fluffy star systems that may or may not be chock-full of dark matter - Ultra Faint Galaxies are interesting too. Not so much my speciality though, so bear that in mind.
Ultra Diffuse Galaxies are defined as having few stars per unit area. But because their total area can be very large, overall they can be quite "bright", at least in the sense of radiating lots of energy. Imagine if you could make a light that sent out the same total power as a floodlight but was ten metres on a side - close up, it'd look pretty dim to the eye, even though the total amount of energy per second was the same as that of the floodlight.
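For the numerically minded, here's the floodlight analogy as a trivial calculation (the numbers are arbitrary) : same total power, wildly different brightness per unit area.

```python
# Same total luminous power spread over a larger area gives a much lower
# surface brightness, even though the total output is unchanged.
def surface_brightness(total_power_watts, area_m2):
    """Power emitted per unit area - the quantity your eye judges 'dimness' by."""
    return total_power_watts / area_m2

power = 500.0  # watts, the same for both "lights" (arbitrary)
small_floodlight = surface_brightness(power, area_m2=0.1 * 0.1)     # 10 cm panel
big_diffuse_panel = surface_brightness(power, area_m2=10.0 * 10.0)  # 10 m panel

print(f"compact: {small_floodlight:.0f} W/m^2, diffuse: {big_diffuse_panel:.0f} W/m^2")
print(f"the diffuse panel is {small_floodlight / big_diffuse_panel:.0f}x fainter per unit area")
```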
Ultra Faint Galaxies, in contrast, are defined simply by the total amount of light they emit. They can be small and compact or big and fluffy.
This UFG is of the big and fluffy variety (not as big and fluffy as UDGs mind you). The paper is unusually thorough and complete, describing the discovery, follow-up observations, significance, and modelling. They even comment on the chemistry and possible gamma-ray emission of the object. And the icing on the cake is they actually manage to make the paper readable, so huge kudos to them for that.
While UDGs can be detected at large distances, UFGs are really only detectable in our Local Group. They're important because they might help us understand the missing satellite problem (models predict that we should find more small, nearby galaxies than we actually do) and also for studying galaxy dynamics. One such recent discovery (Crater II) was found to have unusually slow-moving stars, which, taken at face value, contradicts the standard model in which galaxies are all dominated by massive dark matter halos - generally their stars are moving much more quickly.
It would be a mistake to think that Crater II is definitive evidence against the dark matter model though. While such an object is indeed compatible with alternative theories of gravity, it's also possible that it's simply lost much of its dark matter through tidal encounters. With pathetically small statistics, every object discovered in this class is significant.
That's where Antlia 2 comes in. The authors discovered this using Gaia data. Gaia provides direct distance measurements to nearby stars but also proper motion (that's motion across the sky) data as well. In this case, it was by looking at the proper motions that the authors noticed a group of stars that hadn't been seen before. Gaia makes this much easier in this region, where the density of stars, gas and dust towards the plane of the Galactic disc makes it difficult to spot anything at all. And by the standards of dwarf galaxies, Antlia 2 is a biggie - much bigger than Crater II, and even comparable in size to the Large Magellanic Cloud (which has been known since prehistoric times). Only its incredible faintness - it's 4,000 times fainter than the LMC ! - and crowded location have kept it hidden for this long. That's no match for Gaia, however.
Antlia 2 is also very cool. That is, like Crater II, its stars aren't moving very quickly. Unlike larger galaxies it doesn't seem to be rotating at all, the stars are just buzzing around randomly. That's not at all unusual for dwarf galaxies. What is unusual is that the stars only appear to be moving at around 6 km/s, whereas for an object this size, ~20 km/s might be expected. Taken at face value, this would mean that Antlia 2's dark matter halo has the lowest density of any such halo. So how could the stars end up being so dang lazy ? Is it a super-extreme object or did it start life as something more normal and have lethargy thrust upon it ?
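To see why those few km/s matter so much, here's a back-of-the-envelope dynamical mass estimate. The M ~ sigma^2 R / G estimator is only good to a factor of a few, and the 3 kpc radius below is a round illustrative number rather than the measured size from the paper :

```python
# Rough enclosed mass implied by a velocity dispersion at a given radius.
G = 4.30e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def dynamical_mass(sigma_kms, radius_kpc):
    """Very rough enclosed mass, M ~ sigma^2 * R / G (good to a factor of a few)."""
    return sigma_kms**2 * radius_kpc / G

radius = 3.0  # kpc, illustrative only
m_slow = dynamical_mass(6.0, radius)        # roughly what's observed
m_expected = dynamical_mass(20.0, radius)   # what a 'normal' halo would give

print(f"M(<{radius} kpc) ~ {m_slow:.1e} Msun observed vs {m_expected:.1e} Msun expected")
print(f"i.e. roughly {m_expected / m_slow:.0f}x less mass than expected for its size")
```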
There are several possibilities. One is that maybe the shape of the dark matter halo isn't typical. The usual assumption, based on models, is that dark matter halos have a central "cusp" (a horrible term we just have to live with), meaning a rapid increase in density in the centre. Antlia 2 might instead have a "core" - a flatter density distribution in the centre. This could happen in two ways : 1) Early feedback (explosions and winds) from young stars could have removed so much gas that the sheer mass of the moving material disrupted the dark matter through its gravitational influence; 2) A tidal encounter with another galaxy (i.e. the Milky Way) could have stripped away much of its dark matter. In either case the end result is that there wouldn't be so much extra mass to accelerate the stars. Any stars which were moving too quickly would have been removed, and the pathetic remnant of the dark matter halo would only have been massive enough to hold on to the most sluggish.
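For anyone who hasn't met "cusps" and "cores" before, here's a quick numerical illustration using two textbook density profiles - the NFW form and a pseudo-isothermal form. These are standard shapes, not the particular fits used in the paper :

```python
# 'Cusp' vs 'core' : the NFW profile keeps rising as ~1/r towards the centre,
# while a pseudo-isothermal profile flattens out to a constant central density.
def nfw_density(r, rho0=1.0, r_s=1.0):
    """NFW: rho ~ 1/[(r/r_s) * (1 + r/r_s)^2] - diverges ('cusps') as r -> 0."""
    x = r / r_s
    return rho0 / (x * (1.0 + x) ** 2)

def cored_density(r, rho0=1.0, r_c=1.0):
    """Pseudo-isothermal: rho ~ 1/(1 + (r/r_c)^2) - flattens to rho0 in the centre."""
    return rho0 / (1.0 + (r / r_c) ** 2)

for r in [0.01, 0.1, 1.0]:  # radii in units of the scale radius
    print(f"r = {r:5.2f}:  cusp {nfw_density(r):8.1f}   core {cored_density(r):5.2f}")
# The cuspy profile is roughly 10x denser at r = 0.01 than at r = 0.1; the cored
# one barely changes, which is why a cored halo accelerates its stars far less.
```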
The authors test these scenarios. Neither seems to work by itself, but together they might be able to do it. Thanks to the proper motion data of Gaia, they're able to work out the orbit of the galaxy so they can find out how close it's come to the Milky Way and thus they can estimate the tidal forces. Their initial conditions are necessarily a bit speculative but based on more typical dwarf galaxies. What seems to work is an initially cored dwarf (presumably formed via feedback) that then has a few disruptive orbits around the Milky Way.
There's some observational evidence to support this. Antlia 2 appears to be stretched in its direction of motion, and its chemical content appears unusual for its brightness (suggesting much of its original stellar content has been lost). On the other hand, the disruption ought to make the object more spherical than observed, though it's not certain whether this is a crippling problem or not. Such an object would be able to survive for a few gigayears - a long time, but it probably implies it fell into the Milky Way's orbit much later than other satellites.
Overall, the conclusions are starkly different from the final sentence of the abstract, which says this object may challenge the cold dark matter model - but that was the only inconsistency I spotted. They deserve a press release for this, I just hope it's as good as the paper. :)
http://adsabs.harvard.edu/abs/2018arXiv181104082T
Huge dwarfs or ghostly giants ?
Ultra-diffuse galaxies are enormous but have very few stars. That makes it particularly difficult to say whether their total mass is very high, or very small with their stars spread especially thin. What's especially annoying is that these things are found in large numbers, much to the dismay of those who hoped they might be rare exceptions.
(For a longer introduction see https://astrorhysy.blogspot.com/2017/07/ultra-diffuse-galaxies-revenge-of-ghosts.html; couple of non-crucial missing images, will fix later)
Measuring the total mass directly is difficult, but it's relatively easy to compare them with "normal", brighter galaxies whose mass is better determined. In this paper, the authors compare the size vs. brightness relation of the UDGs with that of normal galaxies. They find, unsurprisingly, that they're bigger and fainter than normal galaxies (duh !) but more interestingly they form a continuous relation with brighter galaxies : they aren't a distinctly different population. This contradicts previous studies, which found that the size-luminosity relation didn't have much scatter. The authors argue that this isn't because anyone did anything wrong, but just because the previous studies wouldn't have been able to detect UDGs.
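The kind of comparison they're doing can be sketched very simply : fit the size-luminosity relation for normal galaxies, then ask whether the UDGs fall on its extension. All the numbers below are invented placeholders, just to show the mechanics - real effective radii and absolute magnitudes would go in their place :

```python
# Fit the size-luminosity relation for 'normal' galaxies, then check where the
# UDGs sit relative to its extrapolation. Data arrays are made-up placeholders.
import numpy as np

# Hypothetical data : absolute magnitude and log10(effective radius in kpc)
mag_normal = np.array([-21.0, -20.0, -19.0, -18.0, -17.0])
logr_normal = np.array([0.60, 0.45, 0.30, 0.15, 0.00])

mag_udg = np.array([-15.0, -14.5, -14.0])   # fainter...
logr_udg = np.array([0.20, 0.18, 0.15])     # ...but still large (made up)

# Straight-line fit to the normal galaxies : log R_e = a * M + b
a, b = np.polyfit(mag_normal, logr_normal, 1)
residuals = logr_udg - (a * mag_udg + b)

print(f"slope = {a:.3f}, intercept = {b:.2f}")
print("UDG offsets from the extrapolated relation (dex):", np.round(residuals, 2))
# Offsets near zero would mean the UDGs are a continuous extension of the relation
# (as the paper finds); large positive offsets would mean they're genuinely
# oversized for their luminosity.
```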
What this means for the mass of the UDGs is unclear. They also find that the structural properties of the UDGs and bright galaxies are different : the shape of the distribution of stars varies in a different way depending on their brightness. Even more confusingly, the UDGs appear to be different from both normal faint and bright galaxies. Which means the things are bloomin' complicated.
This paper is still under review and it's only a letter, but I think there are several parts here that could be explained a lot more clearly (especially the comparisons to normal galaxies). The two main ideas about UDGs have been either that they're basically low-mass galaxies that have been "inflated" by encounters with other galaxies, or that they formed exactly as they are and are as massive as other galaxies of comparable size (AFAIK, no-one has come up with a way for low-mass galaxies to form yet be so hugely extended from birth - the only way is for them to grow over time).
Unfortunately both scenarios could be compatible with the different relations. A dwarf galaxy that becomes much more extended might also be structurally affected in the process (detailed intro on galaxy structure : https://astrorhysy.blogspot.com/2017/11/the-dark-side-of-galaxy-evolution-ii.html). But a giant galaxy that's born with very few stars might also have a different light profile from a brighter object. It would have been nice if the UDGs had been clear outliers from the general trends rather than a continuous extension of existing relations, but the Universe isn't cooperating. On the other hand, these measurements give an extra constraint for anyone trying to simulate the formation of these ghostly irritants.
http://adsabs.harvard.edu/abs/2018arXiv181101962D
Monday, 5 November 2018
Missing dark matter in dwarf galaxies?
Of course, I would have gone for a different title :
Yo Dawg, Herd You Like Missing Matter, So We Stole Some Of Your Missing Matter So You Can Miss Matter From Your Missing Matter
Some time ago you may remember I was going on about ultra diffuse galaxies (UDGs). These ghostly systems are comparable in size to the Milky Way but 100-1000x fainter. And you might remember I posted something like the plot shown below :
This is the baryonic Tully-Fisher relation (TFR). It plots the total mass of stars and gas as a function of how fast a galaxy is rotating (which is usually a good proxy for total mass of dark matter).
In the plot you can see that normal galaxies, in blue, lie on a nice neat straight line. It's possible to derive this line analytically. The problem is that the same analysis predicts a population of galaxies which don't sit on this line, and such galaxies hadn't hitherto been found.
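That straight line is, to a decent approximation, M_bar proportional to v^4. The normalisation in the sketch below is the commonly quoted McGaugh-style value of roughly 50 solar masses per (km/s)^4, written from memory, so treat it as approximate :

```python
# The baryonic Tully-Fisher relation as a power law : M_bar ~ A * v_flat^4.
# A ~ 50 Msun / (km/s)^4 is an approximate, commonly quoted normalisation.
A = 50.0  # Msun per (km/s)^4, approximate

def tfr_baryonic_mass(v_flat_kms):
    """Baryonic mass expected from the Tully-Fisher relation for a flat rotation speed."""
    return A * v_flat_kms ** 4

for v in [20, 50, 100, 200]:
    print(f"v_flat = {v:3d} km/s  ->  M_bar ~ {tfr_baryonic_mass(v):.1e} Msun")
# A galaxy sitting above this line has more stars and gas than its rotation speed
# 'should' allow (high formation efficiency); one below it has less (low efficiency).
```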
The red points here show UDGs where it was possible to measure their rotation speed. Clearly they don't lie on the normal TFR. So good news, right ? Not necessarily. Those velocity measurements are uncertain because it's hard to estimate the viewing angle, which can strongly affect the estimated rotation velocity. We can only get a direct measurement of rotation if we're lucky enough to see the galaxy edge-on; if we see them face-on, we can't measure the rotation at all. The fainter the galaxy, the harder it is to estimate the viewing (inclination) angle and the less accurate the correction will be. Which is bad news for things as faint as UDGs. Nevertheless, at least some UDGs do appear to deviate from the usual TFR.
The black points are the optically dark hydrogen clouds I've been investigating in the Virgo cluster. Their velocity widths are more secure, though their possible origins are more complicated.
Much to my delight, today's paper by Oman et al. is all about objects with those strange deviations from the TFR in both directions. They phrase things a bit differently, largely talking about galaxy formation efficiency. In essence, galaxies with higher baryonic (stars and gas) masses than expected have apparently high formation efficiency, in the sense that a small dark matter halo has accumulated more gas and stars than usual. Galaxies with lower baryonic masses than expected have correspondingly lower formation efficiencies. Or if you prefer, you can talk about galaxies with higher or lower rotation velocities than expected, it doesn't really matter.
Slightly annoyingly, Oman et al. don't cite either the Leisman et al. UDG paper (red points) or my own dark clouds (black points)*. On the positive side, they discuss other systems previously unknown to me that also deviate from the TFR, in both directions. And these galaxies are not especially weird in other ways : they have much more normal levels of surface brightness. So it isn't just weirdly extreme objects that deviate from the TFR - more normal galaxies can do it too. And that's very reassuring.
* More oddly, they don't even mention that famous galaxy without dark matter so I guess it's nothing personal. :P
What could explain the deviations ? If I understand them correctly, there's not much problem explaining objects of low formation efficiency (fast rotators). Those would just be objects where the gas and stars are very extended. But cases of high formation efficiency (slow rotators), they say, are not compatible with the standard model. In fact, although the model does predict stronger scatter in the TFR in this regime, the predicted scatter goes in the opposite direction to what the observations indicate.
The standard model could be wrong, of course, but let's leave that one on the "maybe" pile for now. Other options they suggest are that the gas and stars may not probe the full dark matter halo so their measurements underestimate the maximum rotation speed (this is also possible for some cluster galaxies which have experienced extreme amounts of gas loss, leaving behind only a remnant core of gas in their central regions - http://adsabs.harvard.edu/abs/2013MNRAS.428..459T). But that doesn't seem to work here because they have full rotation curves, and they're flat. So even if the gas and stars were more extended, the measured rotation would be the same. Another option could be that there's less dark matter than expected in the central regions of the galaxies, but simulations show that effect is far too weak.
Could it simply be a measurement error ? The distance estimates seem secure. Could the galaxies have been stripped, like those in Virgo ? No, they're too isolated.
What about the viewing angle ? It's hard to be sure, but this is definitely their favoured option. The measured rotational velocities of the deviant galaxies are very small, ~20 km/s (the Milky Way is more like 250 km/s), and a change to 30 km/s would be enough, in at least one case, to bring them back into agreement with normal galaxies. It only needs a very small error to explain this. The same problem could affect low-efficiency, fast-rotating galaxies too. If their viewing angle is estimated to be too low, then this will exaggerate the calculated rotation speed.
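To see just how small an error is needed, remember that the rotation speed is recovered as v_rot = v_los / sin(i), so for galaxies seen close to face-on a modest mistake in the inclination i makes a big difference. The numbers below are chosen to echo the 20 to 30 km/s example above, not taken from the paper :

```python
# De-projecting the line-of-sight velocity : v_rot = v_los / sin(i),
# where i = 90 deg means edge-on. Illustrative numbers only.
from math import sin, radians

def corrected_velocity(v_los_kms, inclination_deg):
    """De-projected rotation speed from the line-of-sight component."""
    return v_los_kms / sin(radians(inclination_deg))

v_los = 10.0  # km/s along the line of sight (illustrative)

for assumed_i in [30, 25, 20]:
    print(f"assumed i = {assumed_i} deg  ->  v_rot = {corrected_velocity(v_los, assumed_i):.0f} km/s")
# Going from an assumed inclination of 30 deg to ~20 deg is enough to turn a
# ~20 km/s rotator into a ~30 km/s one : a small change on the sky, a big change
# on the Tully-Fisher diagram.
```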
As far as I know this is entirely plausible for the systems they discuss here. But what about the ones in the plot ? I'm more skeptical. I went through the UDGs manually, and dang it, at least some of them really look like we're viewing them close to edge-on, so their velocities should be accurate. And for the dark clouds the velocity width is a lower limit, so they can only be wider than plotted here, not narrower.
What's the answer ? Dunno. Sorry.
https://arxiv.org/abs/1601.01026