I normally steer clear of papers on astrochemistry because even plain atomic hydrogen is complicated enough. I made an exception for this one, though, because a) it's short and b) it's about an ultra-diffuse galaxy.
UDGs are hard to measure because they're big and faint. Even getting their total mass is extremely difficult, so measuring their chemical composition isn't much fun either. But these guys seem to have managed it, using the giant Keck telescope and a fancy Integral Field Unit (IFU). Traditional spectroscopy - which gives you chemical composition and velocity information - only gives you measurements at a single point or along a slit, but an IFU gives you spectral information at every pixel. 3D optical data cubes, because science.
This also lets them measure the kinematics. This particular UDG has a velocity dispersion of 56 km/s. For comparison the Milky Way has a rotation speed of ~220 km/s. Speed is a good proxy for total mass (i.e. how much dark matter is present) but it also depends on where you make that measurement. For disc galaxies, we can use the gas to probe regions far outside the stellar disc. We can't do that for UDGs - at least not this one - so we can't really get a good estimate of its total mass (there are methods of extrapolating, but the authors didn't try any). What they do show, however, is that the mass of dark matter within the measurable region is much, much higher than for more typical galaxies. It's way off the usual relation. If I were less cautious, I'd say that indicates it might be a very massive object indeed, but the authors (probably wisely) don't comment.
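To get a feel for the numbers, here's a quick back-of-the-envelope sketch. The 56 km/s dispersion is from the paper, but the measurement radius, the stellar mass and the crude estimator M ~ σ²R/G are my own round-number assumptions, purely for illustration :

```python
# Back-of-the-envelope dynamical mass from a velocity dispersion.
# The 56 km/s is quoted above; the radius, stellar mass and the crude
# estimator M_dyn ~ sigma^2 * R / G are illustrative assumptions only.

G = 4.301e-6   # gravitational constant in kpc (km/s)^2 / M_sun

sigma = 56.0   # velocity dispersion in km/s
R = 3.0        # assumed radius of the measurable region in kpc (hypothetical)

M_dyn = sigma**2 * R / G
print(f"Dynamical mass within {R} kpc : {M_dyn:.1e} M_sun")   # ~2e9 M_sun

# Compare with an assumed stellar mass of ~1e8 M_sun (order of magnitude
# only) to see why the dark matter fraction comes out so high.
M_star = 1e8
print(f"Dynamical-to-stellar mass ratio : {M_dyn / M_star:.0f}")
```

Even with generous uncertainties on those made-up inputs, the enclosed mass comes out well above anything the stars alone could supply, which is the sense in which the galaxy is "way off the usual relation".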
Chemically the galaxy is odd too. It seems to have had a prolonged period of star formation, lasting about 10 Gyr (don't ask me how they measure this). Given all the expected supernovae, that should make it enriched in iron - but it isn't. In fact its magnesium-to-iron ratio is way, way off, even compared to other UDGs.
How could this be ? It may depend on how the galaxy formed and when the different types of supernovae exploded. Early supernovae (from short-lived massive stars) may have blasted most of the galaxy's gas out into intergalactic space. This could also remove some of the dark matter through a sort of "gravity tractor" effect, since the gas could initially make up a large fraction of the total mass. Whatever gas was left would have been rich in magnesium. Then, much later, supernovae from white dwarfs accreting material in binary star systems would have exploded, but since the mass of the galaxy was by then much lower, most of their iron-rich ejecta would have escaped.
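To make the timing argument a bit more concrete, here's a deliberately crude toy model - entirely my own illustration, not anything from the paper. The idea : core-collapse supernovae make most of the magnesium and go off early, while some of their ejecta can still be retained; Type Ia supernovae make most of the iron and go off late, after the galaxy has lost most of its mass, so almost all of their ejecta escape. The yields and retention fractions below are made-up round numbers :

```python
import math

# Toy illustration (my own, not from the paper) of how losing the late
# iron-rich Type Ia ejecta more efficiently than the early core-collapse
# ejecta boosts the Mg/Fe ratio. All numbers are made-up round values.

mg_ccsn, fe_ccsn = 1.0, 0.5   # relative Mg and Fe from core-collapse SNe
mg_ia,   fe_ia   = 0.0, 1.0   # Type Ia SNe : essentially no Mg, lots of Fe

f_keep_early = 0.3    # fraction of early (core-collapse) ejecta retained
f_keep_late  = 0.05   # fraction of late (Type Ia) ejecta retained

mg = mg_ccsn * f_keep_early + mg_ia * f_keep_late
fe = fe_ccsn * f_keep_early + fe_ia * f_keep_late

# Enhancement relative to the "keep everything" case, in dex
ratio_kept = (mg / fe) / ((mg_ccsn + mg_ia) / (fe_ccsn + fe_ia))
print(f"Mg/Fe enhancement over full retention : {math.log10(ratio_kept):+.2f} dex")
```

Tweak the retention fractions and you can push the ratio up or down by a few tenths of a dex, which is the flavour of offset being discussed - though the real chemical evolution modelling is of course vastly more involved.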
That's their best guess for now, at any rate. But we're still very ignorant of even the basic properties of UDGs. Other, more exotic explanations, like continuous accretion of gas, might also work; and it's hard to see how the supposed loss of dark matter can be reconciled with the galaxy's apparently heavily dark matter-dominated nature. Further research is very definitely needed.
https://arxiv.org/abs/1901.08068
Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.
Tuesday, 29 January 2019
Faster, better, cheaper ?
The "faster, better, cheaper" idea probably suffered from two related things :
1) A (not wholly unjustified) belief that "failure is not an option", though in fact failure is part and parcel of research, especially risky research.
2) While missions were relatively cheap, they still required serious levels of money. Certainly, if your $10 million mission explodes this is not as bad as if your $200 million mission dies a fiery death. But who's willing even to risk $10 million ? It's a heck of a lot of money, and a lot of invested time, to then tell people, "well, it might all just go kerblamo, but we'll see".
And a possible third point : the overall savings are hard to assess at the time. Only after a considerable period of implementation can you evaluate which approach is more economical. And even if the savings were evident, that might not be terribly comforting if you were working on a project that exploded and all your work was wasted.
The article mentions cubesats at the end, which are cheap enough to dodge this scary-number threshold. I would cautiously add that, thanks to SpaceX's sequence of spectacular first-stage landing failures (whilst still successfully delivering the payloads to orbit, thus giving the best of both worlds), the mantra that failure is acceptable might become easier to sell. It won't work for manned missions, but it might for robotic probes.
In an effort to correct the planetary science community’s impression and memory of FBC, here are some facts that changed my perception of the era.
- The Viking mission to Mars in 1976 cost $1.06 billion in real-year dollars and took 6 years to develop. The Pathfinder team was instructed to send a lander AND rover (Sojourner) to Mars in half the time and 1/14 the budget. They succeeded.
- In fact, the Pathfinder lander cost less than the life detection experiment on Viking ($220 million, inflation-adjusted). The Sojourner rover only cost $25 million.
- Leveraging new CCD detector technology, the Pathfinder team spent $7.4 million to develop a new camera while the Viking team spent $27.3 million (inflation-adjusted) on their 2 cameras (1 for each lander). New CCD detector technology allowed a significant camera mass reduction.
- All 16 FBC missions combined cost less than the Viking missions.
- Cassini required 15 years for development; combined, all 16 FBC missions took 7 years.
- Lunar Prospector, which developed very little new technology yet discovered water ice on the Moon, only cost $63 million.
An obvious counter-argument is that the FBC spacecraft may have been cheap, but they were also a lot less capable than larger spacecraft with many more instruments. True. Cost reduction happens in part by reducing capability, e.g., Pathfinder didn’t have an orbiter, and Viking did. However, FBC missions resulted in more scientific publications (a proxy for science return) per dollar spent than traditionally managed missions (Dillon and Madsen 2015). The ability of the FBC approach to increase the science return from finite funding is a missing yet critical part of the space community’s narrative about the Faster, Better, Cheaper era.
http://www.elizabethafrank.com/colliding-worlds/fbc
SPACE NAZIS BECAUSE SPACE NAZIS
I dunno why it took 7 weeks to appear on arXiv after acceptance, but it did. A longer blog post replete with OH GOD SO MANY NAZI REFERENCES is available here. This is the "short and to the point" version.
This project is the result of the twisted mind of my former housemate. He found that there were competing claims for the minimum population you could use to start an interstellar colony. One said you'd need only about 150 people, the other said it was more like 14,000. Quite a disparity !
Surprisingly, there doesn't seem to be much in the literature about minimum viable population size (perhaps there is and we as astronomers simply aren't aware of it). So Frederic decided to write his own numerical, agent-based code based on available medical data. Virtual people, called agents, have various parameters - age, gender, fertility, etc., which can be tracked and altered within the simulation. The crucial bit is that the ancestry of the agents is also monitored. That means that the amount of inbreeding aboard ship can be both tracked and controlled.
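To give a flavour of what such an agent-based code looks like, here's a minimal sketch of the general idea - emphatically not Frederic's actual code, and every rate, age window and starting number below is a made-up placeholder :

```python
import random

# Minimal agent-based crew simulation sketch. Not the code used in the
# papers; all rates, age windows and the starting crew size are made up.

class Agent:
    def __init__(self, age, sex, parents=frozenset()):
        self.age = age
        self.sex = sex          # 'F' or 'M'
        self.parents = parents  # ancestry record, used to veto inbreeding

def related(a, b):
    # Crude inbreeding check : forbid couples who share a parent.
    return bool(a.parents & b.parents)

def step(crew, breeding_age=(20, 45), death_rate=0.01, birth_prob=0.1):
    # Ageing and mortality
    survivors = []
    for a in crew:
        a.age += 1
        if a.age < 90 and random.random() > death_rate:
            survivors.append(a)

    # Pair up unrelated adults in the breeding window; each pair may have a child
    women = [a for a in survivors if a.sex == 'F' and breeding_age[0] <= a.age <= breeding_age[1]]
    men   = [a for a in survivors if a.sex == 'M' and breeding_age[0] <= a.age <= breeding_age[1]]
    babies = []
    for w, m in zip(women, men):
        if not related(w, m) and random.random() < birth_prob:
            babies.append(Agent(0, random.choice('FM'), parents=frozenset([id(w), id(m)])))
    return survivors + babies

crew = [Agent(random.randint(20, 40), random.choice('FM')) for _ in range(100)]
for _ in range(200):            # a 200-year voyage
    crew = step(crew)
print(f"Population after 200 years : {len(crew)}")
```

The real code tracks far more - fertility, full ancestry and so on, as described above - but the basic structure (loop over years, age everyone, kill some off, let permitted couples breed, log the ancestry) is essentially this.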
The first paper examined the earlier claims, which both (somewhat arbitrarily) dealt with a 200 year-long voyage. Both of them do still have a population of survivors aboard by the end of the simulation, but neither is doing very well. The smaller crew would be on the verge of extinction, and though the larger ship is doing okay, there's a worrying level of inbreeding. The main problem seems to be not the population size so much as the arbitrary and fixed rules for procreation, especially the restriction that only those aged 35-40 were allowed to breed. Perfectly healthy people were forbidden from breeding, and the strict two-children-per-couple policy was found to be counter-productive : it prevents overpopulation, but it's much too aggressive.
The second paper attempted to redress the wrongs and find a better value for the minimum population required. By making the rules more flexible (e.g. allowing 3 children per couple when the population drops too low, and widening the permitted breeding age range), and by running hundreds of simulations with different starting populations, the minimum crew was found to be 98 (with any inbreeding forbidden). With that many inhabitants or more, the missions always succeed even accounting for random variations : they reach a stable population level of ~500 indefinitely. Strictly speaking this didn't establish the minimum stable population level, but it's clearly somewhere between 100 and 500.
It's technically possible to have a successful mission with as few as 32 occupants. But to guarantee success would require an incredibly strict breeding program, hence the Nazism. And with breeding ages of 32-40, we have a population of Nazi milfs.
In this third paper we estimate how much food the crew would need, since they'd have to grow their own rather than taking stores. This accounts for the height, weight, and activity levels of the crew, which vary (again with random variations) over their lifetimes. We found they'd need about half a square kilometre of farmland, so the size of the spaceship is comparable to a skyscraper. It doesn't really matter if the crew were all couch potatoes or Olympian athletes either. And this included a healthy, balanced diet with plenty of room for the animals... so yeah, space Nazi farmer milfs with cows. Seriously.
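The basic arithmetic behind that kind of farmland figure is simple enough to sketch. These are my own assumed round numbers (daily energy needs and crop yield), not the values actually used in the paper, which models diet and metabolism in far more detail :

```python
# Rough sketch of the farmland arithmetic with assumed round numbers (not
# the paper's values) : people need kilocalories, crops supply kilocalories
# per square metre per year, divide one by the other.

crew           = 500      # roughly the stable population mentioned above
kcal_per_day   = 2500     # assumed average daily energy need per person
kcal_per_m2_yr = 1000     # assumed edible crop yield per m^2 per year

demand_per_year = crew * kcal_per_day * 365
area_m2 = demand_per_year / kcal_per_m2_yr
print(f"Farmland needed : {area_m2:,.0f} m^2 (~{area_m2 / 1e6:.2f} km^2)")
```

With these particular guesses you land at roughly half a square kilometre, the same ballpark as the paper - though of course the real answer depends on the diet, the crops, and how hard the crew are working.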
In a future paper we may attempt to estimate the water requirements, and possibly how the farmland would be affected by the recycling efficiency. Ultimately we'd like to establish the minimum mass of a colony ship, though we're under no illusions about the complexity of the task. Per ardua ad astra, and all that.
https://arxiv.org/abs/1901.09542
Monday, 28 January 2019
Is the replication crisis real ?
I'm a bit sceptical that any kind of "crisis" exists. While we can always improve on methods and statistics, the basic premise here - that "lots of data => improbable events happening by chance" - is not exactly obscure or difficult to guess. It's obvious as soon as you learn about Gaussian statistics, or even earlier.
Suppose there are 100 ladies who cannot tell the difference between the tea, but take a guess after tasting all eight cups. There’s actually a 75.6 percent chance that at least one lady would luckily guess all of the orders correctly.
Now, if a scientist saw some lady with a surprising outcome of all correct cups and ran a statistical analysis for her with the same hypergeometric distribution above, then he might conclude that this lady had the ability to tell the difference between each cup. But this result isn’t reproducible. If the same lady did the experiment again she would very likely sort the cups wrongly – not getting as lucky as her first time – since she couldn’t really tell the difference between them.
This small example illustrates how scientists can “luckily” see interesting but spurious signals from a dataset. They may formulate hypotheses after these signals, then use the same dataset to draw the conclusions, claiming these signals are real. It may be a while before they discover that their conclusions are not reproducible. This problem is particularly common in big data analysis due to the large size of the data : just by chance, some spurious signals may “luckily” occur.
We deal with this in radio astronomy all the time. With >100 million data points per cube, the chance of getting at least one interesting-but-spurious detection is close to 1.0, especially considering that the noise isn't perfectly Gaussian. We get around this by the simple process of doing repeat observations; I find it hard to believe that anyone is seriously unaware that correlation <> causation at this point. Charitably, the article may just be over-simplifying. While there are certainly plenty of weird, non-intuitive statistical effects at work, I don't believe the sheer size of a data set is causing anyone to panic.
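Both numbers are easy enough to check. Here's a quick sketch reproducing the tea-tasting figure and my data-cube claim; the classic design (the lady knows four of the eight cups are milk-first, so a pure guess succeeds with probability 1/C(8,4) = 1/70), the 5σ threshold and the 100 million independent samples are my own assumptions, and the small difference from the quoted 75.6% presumably comes down to the exact design assumed :

```python
from math import comb, erfc, sqrt

# Tea-tasting : chance that at least one of 100 purely guessing ladies sorts
# all eight cups correctly, assuming she knows four cups are milk-first.
p_one = 1 / comb(8, 4)                      # = 1/70 per lady
p_any = 1 - (1 - p_one) ** 100
print(f"P(at least one lucky lady out of 100) = {p_any:.1%}")   # ~76%

# Radio-astronomy version : chance of at least one spurious >5-sigma spike in
# 1e8 independent Gaussian noise samples. Threshold and sample count are
# illustrative; real cube noise is correlated and non-Gaussian, which only
# makes things worse.
p_spike = 0.5 * erfc(5 / sqrt(2))           # one-sided 5-sigma tail probability
n = 100_000_000
print(f"Expected false 5-sigma peaks : {n * p_spike:.0f}")      # a few dozen
print(f"P(at least one false peak)   : {1 - (1 - p_spike) ** n:.6f}")
```

Which is exactly why repeat observations (or any independent confirmation) are the boring-but-effective fix : the chance of the same spurious spike appearing twice in the same place is vanishingly small.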
https://theconversation.com/how-big-data-has-created-a-big-crisis-in-science-102835
Tuesday, 22 January 2019
Yesterday's Jan Frič award ceremony, wherein I look unusually presentable.
We cordially invite all staff and guests of the Astronomical Institute ASCR to the ceremony of Jan Frič Premium for the year 2018.
The ceremony will take place in the library of the Astronomical Institute ASCR in Ondřejov on Monday, January 21st, from 13:00. Since 2009, the Astronomical Institute has granted the Jan Frič Premium to young researchers of the Institute for extraordinary results which contribute to its international prestige. The laureate for 2018 is Rhys Taylor, Ph.D., of the GPS department. He received the Premium for his study of the origin of dark extragalactic clouds of neutral hydrogen. He will deliver a lecture on this topic (in English), entitled "Inert hydrogen clouds in the Virgo cluster : dark galaxies, tidal debris, or something else ?"
Abstract :
The Arecibo Galaxy Environment Survey is a large-area, blind neutral hydrogen survey, designed to observe the full range of galaxy environments without the optical biases of traditional surveys. As well as being a sensitive tracer of the effects of environment, HI can be used to detect features which are completely optically dark and cannot be detected by other methods. I will review some of the optically dark features detected so far by AGES, concentrating on a population of isolated, compact clouds in the Virgo cluster with high line widths. Such clouds are not easily explained as tidal debris owing to their high line width and isolation. One possibility is that they may be "dark galaxies", rotating HI discs embedded in dark matter halos but with a gas density too low to allow star formation. I will review the numerical work we have performed to test three proposed explanations for the clouds : tidal debris, dark galaxies, and pressure confined clouds prevented from dispersal by the pressure of the intracluster medium. 3D glasses will be provided to the audience.
To be honest I could have done without giving the seminar YET AGAIN after the last seminar campaign, but I couldn't really say no... :)
Two's Company : A Second Galaxy Without Dark Matter
A short letter submitted to ApJ by the same team who brought you the first galaxy without dark matter. This second discovery is very similar to the first : it's in the same group of galaxies, at the same distance, has a low surface brightness, is very extended, and has a similarly smooth and boring-looking morphology. Its all-important velocity dispersion is even lower than the first's, at a mere 6 km/s compared to 8-10 km/s - which is again consistent with its dynamics being completely dominated by its stellar mass, with little or no additional dark matter needed.
This is very strange indeed. During tidal encounters between galaxies, it's possible for gravity to tear off enough material to form a brand new (low mass) galaxy without its own dark matter component - that's been known for ages. But such a process ought to be messy. It shouldn't be able to form big, smooth objects that move this slowly. There ought to be debris all over the place : stellar streams and other weird-looking structures. It just shouldn't be able to make anything that looks this damn boring. Well, not quite true : after a good long while things should settle down and most of the crazier stuff ought to disperse, but forming something as smooth as these galaxies should take a very, very long time indeed because their internal motions are so slow. And galaxies produced by this mechanism are chemically different to other galaxies, whereas this one isn't.
Could this just be a normal (though faint) galaxy observed close to face-on where we wouldn't be able to detect any rotation ? It was possible with one object, but that becomes highly unlikely with two - where are all the faint edge-on galaxies, eh ? Similarly, while tidal encounters can act to strip away large amounts of dark matter from ordinary galaxies, it seems incredibly unlikely that we'd find two such objects in a group without the expected tidal debris.
The other weird feature of these objects is that they have a large number of globular clusters given their low stellar mass. What the connection might be with the lack of dark matter is anyone's guess, but it does make it even more likely that they're part of a distinct population rather than being weird but rare exotica. Having a second object changes the picture considerably, but we need many more objects to have any kind of statistical view.
What I'm a bit surprised at is that no-one is talking much about the other (rather large !) population of objects known to show unusually low velocity dispersions from their gas measurements : ultra-diffuse galaxies. I e-mailed van Dokkum about that, because it seems like something that should be discussed more in the literature, but after a couple of weeks I haven't had a response, so I guess I never will.
https://arxiv.org/abs/1901.05973
Tuesday, 15 January 2019
Missing Matter Still Even More Missing, Study Finds
Last year a galaxy that seemed to have no dark matter was doing the rounds because that's freakin' weird. Virtually all galaxies appear to be heavily mass dominated by dark matter : it's arguably the best way to define a galaxy as opposed to a giant star cluster. The whole mainstream basis of galaxy formation and evolutionary theory depends on dark matter as an integral feature. While there are some cases of dwarf galaxies formed by tidal encounters that don't have much (or any) dark matter, these tend to be still embedded in the debris associated with their formation. As far as I know there are no good cases of a dark matter free galaxy just sitting there minding its own business.
If such an object were to be found, it would raise awkward questions for the standard theories of galaxy evolution but make life even more difficult for the main alternative : modified gravity. Tidal encounters between galaxies can strip away dark matter, so it's at least possible to reduce the dark matter content in standard theories (but remove it completely ? I doubt it). For modified gravity, on the other hand, any two star systems of the same size and shape ought to have the same dynamics : gravity should work the same everywhere, more or less. A nice control test where one can compare similar objects is not so easy to find as you might think, but such systems have been found - and the results don't look good for modified gravity.
Then there is this galaxy, NGC1052-DF2. This is an ultra-diffuse galaxy, meaning it's very large but with few stars per unit area. That makes it difficult to measure how fast its stars are moving, which is what you need to work out its total mass. So previously astronomers used its globular clusters, which are much brighter and easier to measure (though there are only a few of them). They found a velocity dispersion of 8-10 km/s - so low that it's consistent with the galaxy having no dark matter at all.
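Just to spell out why ~8 km/s leaves so little room for dark matter, here's the same sort of crude estimate as before. I'm using a Wolf et al.-style estimator, M(<r_half) ≈ 4σ²R_e/G, and the half-light radius and stellar mass below are my own assumed round numbers, not values from the paper :

```python
# Crude check of why ~8 km/s is consistent with no dark matter. The radius
# and stellar mass are assumed round values, not taken from the paper.

G = 4.301e-6        # kpc (km/s)^2 / M_sun

sigma  = 8.0        # km/s, stellar velocity dispersion quoted above
R_e    = 2.0        # kpc, assumed projected half-light radius
M_star = 2e8        # M_sun, assumed total stellar mass

M_half = 4 * sigma**2 * R_e / G      # Wolf et al.-style mass within r_half
print(f"Dynamical mass within r_half : {M_half:.1e} M_sun")
print(f"Stellar mass within r_half   : {M_star / 2:.1e} M_sun")
print(f"Ratio : {M_half / (M_star / 2):.1f}  (order unity = stars alone suffice)")
```

A ratio of order unity means the stars by themselves can account for the observed motions; a typical dark-matter-dominated dwarf of this size would show a ratio of tens or more.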
This made a lot of people very upset. Claims were made that the distance measurement was wrong, which would have made the galaxy perfectly normal - but then an independent team came along and said nope, the distance is correct, this galaxy really is weird.
Still, having only 10 globular clusters has always raised concerns that the estimate of the velocity dispersion relies on small-number statistics. Other teams have questioned the rigour of the claim for such a low dispersion, though in my opinion the original van Dokkum claim always looked stronger. Now, two teams have used extremely powerful instruments to measure the velocity dispersion of the stars directly. Note that both papers are still under review.
The first paper was by an independent group and came out just before Christmas :
http://adsabs.harvard.edu/abs/2018arXiv181207345E
I read this, but didn't bother writing about it because I found it rather badly-worded : I could not easily extract the main point about just how massive the galaxy is supposed to be. Fortunately the new paper (by the original team) is much more clearly written and comments on the Emsellem work.
The bottom line is that this galaxy does indeed seem to have very little or possibly no dark matter whatsoever. This is in conflict with the Emsellem claim in two ways : first, Emsellem claimed that the velocity dispersion could be much higher (13-27 km/s), whereas this paper says it's 8 km/s (just as the original globular cluster measurements indicated); second, Emsellem claimed the galaxy is rotating (albeit slowly), whereas this paper finds no evidence of that. The authors comment directly on the disagreement, noting that they aren't able to explain it. The only hint is that this latest study has a much higher velocity resolution than the Emsellem paper, so it should be more accurate. And their fitted velocity dispersion profiles do seem to match the data extremely well.
As for how well this galaxy does or doesn't fit with modified gravity, as usual there's the complication of the external field effect. In modified dynamics, the presence of a nearby galaxy can change the velocity dispersion in a very different way to conventional theories. Accounting for this, the earlier prediction was that the dispersion should be 13 km/s. That's not consistent with the new results, and only marginally consistent with the Emsellem range - which, if anything, is more favourable to the galaxy having some dark matter than to the modified gravity prediction.
I would expect a great deal of back-and-forth on this issue. My money's on the original van Dokkum team. Though a very strange result, it does seem to stand up to scrutiny so far. Watch this space.
https://arxiv.org/abs/1901.03711
Friday, 11 January 2019
Who watches the watchers ?
Interesting. In general I like the idea of a more open review process. After acceptance, it would be helpful to see the referee reports to be able to track the changes to the paper (everyone forgets when the reviewers provide extremely helpful suggestions, while everyone remembers those times when the reviewer made the paper worse - yet both do occur). But posting the reviews of rejected papers ? That doesn't sit right : the point of rejecting a paper should be that there's no need for more discussion on it. Of course you can post whatever you want on a blog, but that doesn't mean you should : it will only attract more attention anyway.
The biggest change I would make to the review system would be to have a more clearly-defined set of guidelines as to what the reviewer can/should do, e.g. how much control they have compared to the authors. The amount of transparency should be explicit and up-front - different levels may be appropriate in different cases, particularly when public preprint services are used. I don't see a good underlying principle to follow; neither total transparency nor total opacity seem sensible to me. I favour an "if in doubt, accept" approach - rejection should only be used when the paper is fundamentally flawed.
https://neuroneurotic.net/2019/01/10/an-open-review-of-open-reviewing/
Wednesday, 9 January 2019
Space dragons are officially serious science
I'm in a press release based on a Nature paper and I talk about space dragons. Because that's how I roll.
"As we rotated the data cube, we got our first glimpse of the structure that we've nicknamed Orion's Dragon," said Rhys Taylor, a scientist at the Astronomical Institute of the Czech Academy of Sciences and a consultant to the SOFIA team, in a press release. "A few people have said it looks like a sea horse or a pterodactyl, but it looks like a dragon to me."
A bit more of my own explanation and more images (including a VR video) can be found here.
http://astronomy.com/news/2019/01/orions-dragon-revealed-in-3d-by-nasas-airborne-observatory
Monday, 7 January 2019
I am an artist now
The visitor's office now features a ginormous (~3x2 m) print of one of my artsy-fartsy data visualisation projects. Original with explanations here :
https://astrorhysy.blogspot.com/2018/06/h-one.html
The landscape is an intensity map of the hydrogen content of M33, where intensity generates height rather than colour (landscape colours are completely arbitrary and were added by someone else to make things prettier, and rightly so). The background colours are derived from the frequency and intensity of Milky Way hydrogen. They're a bit washed out in the final print version compared to the original, but they get the job done.
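For anyone who fancies trying the same trick on their own data, the core idea of "intensity as height" is only a few lines. This is a generic matplotlib sketch of the concept with a fake blobby field standing in for the HI map - not the actual pipeline used to make the print :

```python
import numpy as np
import matplotlib.pyplot as plt

# Generic "intensity as height" sketch : render a 2D intensity map as a 3D
# surface instead of a flat colour image. The Gaussian blobs below are a
# stand-in for a real HI column density map.

y, x = np.mgrid[-3:3:200j, -3:3:200j]
intensity = np.exp(-(x**2 + y**2)) + 0.5 * np.exp(-((x - 1.5)**2 + (y + 1)**2) / 0.3)

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(x, y, intensity, cmap='terrain', linewidth=0, antialiased=True)
ax.set_title("Intensity rendered as height")
plt.show()
```

Swap the fake blobs for a real moment-0 map (and a proper 3D renderer if you want print quality) and you're most of the way to a landscape.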
The original digital version of this image looks like this :