Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.

Wednesday 22 June 2016

Dragonfly 44 : a new class of galaxy ?

More on these ultra-faint galaxies, a field which continues to develop rapidly. These are galaxies of similar size to the Milky Way but with ~1000x fewer stars. Measuring their stellar masses is relatively easy, but measuring their total mass (i.e. their dark matter content) requires knowing how fast they're rotating. Normally the easiest way to do this is by measuring their gas, but these galaxies tend not to have any gas (with one or two oddball exceptions that are extremely gas rich, just to make life interesting).
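The sums involved are simple enough. Here's a minimal sketch (Python, with made-up but plausible numbers rather than values from any particular galaxy) of how a rotation speed at some radius translates into an enclosed dynamical mass :

# Enclosed dynamical mass from a single rotation measurement: M = v^2 r / G.
# All numbers below are illustrative, not real measurements.
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def dynamical_mass(v_rot_kms, radius_kpc):
    """Mass enclosed within radius_kpc, assuming circular rotation."""
    return v_rot_kms**2 * radius_kpc / G

print(f"{dynamical_mass(200.0, 10.0):.1e} Msun")  # ~9e10 Msun for Milky-Way-ish values

Subtract the stellar and gas mass from that and whatever's left over is, by assumption, dark matter.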

The total mass matters because it has wide-ranging implications for cosmological models, which predict far more dwarf (low mass) galaxies than are observed. Recently I reported that the body of evidence - the few rotation measurements that were available - was swinging quite strongly in favour of these being dwarf galaxies. There still aren't anywhere near enough of them to solve the missing galaxy problem, but hey, at least they're not making things worse.

Except the paper linked today challenges that. According to these authors, one of these ultra-diffuse galaxies has the same total mass as the Milky Way. Now, if this is true, and if most of these new discoveries are equally massive, then we've got a serious problem for galaxy formation models. There's no obvious reason why some galaxies (forming in the same environment) should accrete vastly different amounts of baryonic matter (i.e. gas which eventually forms stars) from each other. Oh, we could probably come up with a hand-waving argument to explain it, but rigorously testing such arguments won't be easy.

On the other hand, this would probably pretty decisively rule out alternative theories of gravity like MOND, which has very successfully explained why galaxies follow a very specific trend linking rotation velocity and total baryonic mass (the baryonic Tully-Fisher relation). Standard theories have a tough time doing that. But these galaxies, and the possible dark galaxies I've been harping on about, seem to indicate that this neat relationship doesn't actually work after all.
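In MOND the relation falls straight out of the maths : the flat rotation speed obeys v^4 = G M a0. A quick hedged sketch of what that predicts (Python, standard constant values, purely illustrative) :

# MOND's baryonic Tully-Fisher prediction: v_flat^4 = G * M_baryon * a0,
# so the baryonic mass is fixed entirely by the rotation speed.
G = 6.674e-11     # m^3 kg^-1 s^-2
A0 = 1.2e-10      # m s^-2, the canonical MOND acceleration scale
MSUN = 1.989e30   # kg

def btfr_mass(v_flat_kms):
    """Baryonic mass (solar masses) implied by a flat rotation speed."""
    v = v_flat_kms * 1e3
    return v**4 / (G * A0) / MSUN

print(f"{btfr_mass(200.0):.1e} Msun")  # ~1e11 Msun for a big spiral

A galaxy rotating like the Milky Way but with ~1000x fewer stars sits nowhere near that line.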

Except after reading the paper it's not at all clear that the galaxy really is as massive as the authors claim :
"We emphasize that the total halo mass is an extrapolation of the measured mass by a factor of ∼ 100. A more robust and less model-dependent conclusion is that the dark matter mass within r = 4.6 kpc is similar to the dark matter mass of the Milky Way within the same radius"

Gravitational lensing studies might eventually solve this, but until then the only safe conclusion is that science is hard. Watch this space.
http://arxiv.org/abs/1606.06291

Tuesday 21 June 2016

Kaboom !


A couple more turbulent clouds. These are smaller than the last ones, with the same sort of turbulence but a bit more energy in the one on the right. Since the resolution is higher than in the previous sims, the small-scale turbulence doesn't decay so rapidly, so they expand much more. One problem is that the simulation domain is rather limited, so we can't tell exactly how large the structures end up. And though we can measure the velocity width, we can't yet measure the all-important signal to noise to tell us what we'd detect over time. That has to await the return of the resident FLASH expert.

Monday 20 June 2016

Wibbly-wobbly, cloudy-woudy


Another turbulent sphere, with slightly different turbulence from the last one (I don't remember exactly what the difference is). On the right is a simple rotation of one particular frame; on the left is a time series. It ends up a bit smoother than the last one, but still does the same basic thing.

The resident FLASH expert thinks that the resolution of the simulations may not be high enough. The turbulent structures are only a few cells wide, so they quickly smooth themselves out. So we'll need to use higher resolution, and it's hard to predict what effect that will have. The smaller cloud (I'll upload that tomorrow), which has higher resolution by default, tears itself apart even though it should be even more tightly gravitationally bound. But the larger cloud is less dense and has a larger surface area, so it experiences more resistance from the surrounding gas. Far too many variables to predict what will happen; the only way to find out is to test it.
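For a feel of the problem, the back-of-the-envelope check is just cell size versus structure size. A sketch with entirely assumed numbers (neither the real domain size nor the cell counts from our runs) :

# Resolution sanity check: how many grid cells span a turbulent structure?
# Both numbers are assumptions for illustration only.
box_kpc = 10.0        # simulation domain size
n_cells = 256         # cells per side at the finest refinement level
cell_kpc = box_kpc / n_cells

structure_kpc = 0.2   # typical size of a turbulent knot
print(f"{structure_kpc / cell_kpc:.1f} cells per structure")
# Anything only a few cells across gets smeared away by numerical diffusion.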

Attack of the Flying Snakes

So here it is, my sixth paper as full author : 27 pages of text, under construction for ~18 months, more than 200 simulations, and with movies of all of them. Detailed blog posts (it deserves two) will follow shortly, but here's the super-short version for lazy / marginally interested people.

There are some hydrogen clouds in the Virgo cluster without any stars. The nearest galaxies look undisturbed and show no signs of any extended hydrogen streams, and they're pretty far away from the clouds. Yet the most popular explanation is that the clouds are some form of "tidal debris", meaning that they were ripped out of galaxies as they passed close to each other. Generally speaking this is quite a sensible explanation : after all, the gas has got to come from somewhere.

The problem is that, thanks to one or two previous simulations - which until now no-one had really bothered to check - this explanation has been used for almost all clouds, regardless of their properties. These particular clouds have high velocity widths, meaning they look like they're rotating. The tidal debris hypothesis is supposed to be able to explain this. In fact, our new set of simulations shows that this is down to people over-interpreting the results. Our simulations are consistent with the previous ones, but show unequivocally that clouds with high velocity widths cannot possibly be explained as tidal debris.

We also tested the alternative hypothesis that the clouds could themselves be "dark galaxies" - rotating hydrogen discs embedded in dark matter halos. That scenario turns out to do a far, far better job of explaining the observations, and seems to tie in quite nicely with the newly-discovered "ultra diffuse galaxies" (very faint galaxies discovered in the Virgo cluster which do at least have some stars, just not very many).

Why do these stupid poxy gaseous anomalies matter ? Because "dark galaxies" were proposed to explain the missing satellite problem, the observation that there are far fewer small galaxies than predicted by simulations. This has been a major thorn in the side of cosmological models for the last 20 years or so.

Not that we should get carried away. We've shown that tidal debris definitely doesn't work, and dark galaxies do work. But a model which works is not the same as a model which is correct. Other explanations are possible and our simulations (like the previous ones) are missing a lot of important physics. The take-home message of the paper is that if you find a mysterious hydrogen cloud, hand-waving explanations about "tidal debris" are just not good enough.

More research is needed.
http://arxiv.org/abs/1606.05499

Friday 17 June 2016

No, really, it's supposed to do that

For once a simulation where we expect everything to disintegrate....


So there are these weird hydrogen clouds in the Virgo cluster without any stars. For years, people have been waving their hands and shouting, "tidal debris !", meaning they think they were ripped out of galaxies during close encounters. This is almost certainly the case for some of them, but, as I shall show in a forthcoming post, it cannot be true for all of them.

The weirdest clouds have line widths comparable to those of giant galaxies. A line width just means we've measured how fast the gas is moving within the cloud along our line of sight. If we had very high resolution we'd know whether the gas was rotating or not, but these clouds are too small to tell easily. So all we know is how fast their gas is moving along our line of sight. And it's too fast. You just can't produce structures like this in tidal encounters, and we'd expect the clouds to quickly explode if they were composed only of the observable gas - so quickly that it's not very likely we'd ever observe them.
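"Too fast" is easy to quantify : compare the escape velocity of a gas-only cloud with its measured line width. The numbers below are assumptions of roughly the right order, not the measured properties of any particular cloud :

import math

# Can a cloud's own gas hold it together? v_esc = sqrt(2 G M / r).
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def escape_velocity(mass_msun, radius_kpc):
    return math.sqrt(2 * G * mass_msun / radius_kpc)

m_gas = 1e8    # Msun - assumed HI mass
radius = 2.0   # kpc - assumed cloud size
print(f"v_esc ~ {escape_velocity(m_gas, radius):.0f} km/s")
# ~20 km/s : gas moving at well over 100 km/s blows straight past this.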

But then Burkhart & Loeb came up with another idea : maybe the clouds are pressure confined by the intracluster medium. Even intergalactic space is not empty, and although it's very thin, the gas inside clusters is also very hot. So the pressure from this gas could prevent these clouds from flying apart.

Now, if the clouds' high velocity widths were just due to their temperature, that might work. Their thermal pressure could neatly balance the pressure of the external gas. The problem is that the temperature required (>100,000 K) is much too hot for the gas to remain neutral - it should be ionized. The only way out is if the line width arises from turbulent motions instead. But turbulence, by definition, is unstable.
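Where does that >100,000 K come from ? If the width were purely thermal, the implied temperature is T = m_H σ² / k. A quick sketch (the 100 km/s width is an assumed round number, not a specific measurement) :

# Temperature implied if a line width were purely thermal: T = m_H * sigma^2 / k_B.
K_B = 1.381e-23   # J/K, Boltzmann constant
M_H = 1.673e-27   # kg, mass of a hydrogen atom

def thermal_temperature(fwhm_kms):
    sigma = fwhm_kms * 1e3 / 2.355  # Gaussian FWHM -> 1D velocity dispersion in m/s
    return M_H * sigma**2 / K_B

print(f"{thermal_temperature(100.0):.1e} K")  # ~2e5 K - hydrogen is long since ionized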

So here's the first result of a little project to investigate how long these clouds could last if they were the turbulence-supported spheres proposed by Burkhart & Loeb. The answer ? Not long. As you can see (same sim in both gifs but with a different colour scheme) the clouds almost instantly disintegrate, but what you can't see is that they rapidly heat up. In about 50 million years all of the gas would be ionized, and they'd become undetectable in rather less time than that (quantifying exactly how much less is a work in progress). That's far too short to explain how the clouds got so far away from the nearest galaxies, and it's difficult to see how they could even have formed in the first place.
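To see why 50 million years is too short, just multiply a plausible speed through the cluster by the survival time (illustrative numbers again) :

# How far can a cloud drift before it ionizes? distance = v * t.
KPC_PER_GYR_PER_KMS = 1.023  # 1 km/s is about 1.023 kpc/Gyr

v_kms = 1000.0  # assumed speed, of order the velocity dispersion of Virgo
t_gyr = 0.05    # ~50 Myr survival time from the simulation
print(f"{v_kms * KPC_PER_GYR_PER_KMS * t_gyr:.0f} kpc")
# ~50 kpc at best - hard-pressed to explain clouds found far from any galaxy.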

But this is a teaser. More to come.

Thursday 16 June 2016

Getting away with it

It looks like I'm going to get away with calling this paper "Attack of the Flying Snakes". I like this referee.

Sadly Robert Minchin's suggestion that we call it "Snakes on a Fundamental Plane" (https://en.wikipedia.org/wiki/Fundamental_plane_(elliptical_galaxies)) came too late to feasibly re-word the paper. Next time...

Monday 13 June 2016

Tropic Thunder

A more personal account of Arecibo (warts and all, to some extent), how you can help ensure its continued funding, and why you should do so.

TLDR version of this longer post :
Arecibo is very far from being outdated, nor is it likely to be surpassed in the next decade or two. It's an extremely mature facility - rather than being obsolete, it's more capable now than it ever was before. It's had many upgrades since its construction in 1963; new discoveries are still resulting from the last one in 2004, and further upgrades could improve it still further. There is no other facility planned that could fully supersede Arecibo, save perhaps the Square Kilometre Array, which is unlikely to be operational within the next 15 years (and if you're American and worry about these things, the US isn't playing much of a role in it). Even that will not necessarily reproduce, let alone exceed, all of Arecibo's capabilities. Arecibo requires a relatively modest amount of funding for a unique and diverse range of scientific outputs : from asteroids to aliens, pulsars to planets, galaxies and... err... GoldenEye...

The NSF are preparing to assess the long-term funding model of Arecibo; options range from sustaining the existing funding model right down to site closure. Although the decision isn't expected until sometime next year, the official public consultation period ends on June 23rd this year. So get your comments to them ASAP. A suggested generic "I love Arecibo !" message is included. Additionally or alternatively, you can sign a poll (deadline June 26th) if you just want to make your support known but don't wish to commit to a specific funding plan.

While few people really expect Arecibo to close, this isn't an option that should be dismissed entirely. It's all too easy for funding agencies to conclude that Arecibo is a technological dinosaur that can't compete with new telescopes like the Chinese FAST or the SKA pathfinder telescopes. It's actually none of these things - it's capable of science which is simply impossible anywhere else. There's really no reason to think that its greatest days don't lie ahead of it.

Sunday 5 June 2016

The Dish Is Not Enough

I just want to quickly point out something about China's awesome new radio telescope - it's not as big as you might think. Yes, the dish is enormous, and yes, it's much bigger than the now-previous record holder. But sheer dish size isn't everything. While most normal reflecting dish telescopes like the GBT, Jodrell Bank, Parkes, etc. can use all of their collecting area to detect radio waves, this isn't the case for the giants.

Arecibo rarely uses the full collecting area of its 305m reflector. That's only possible when it's pointing directly overhead, since the dish is far too big to move. Instead it collects radio signals from an area roughly equivalent to a circle 225m in diameter. The advantage of this is that the instruments suspended above the dish can be moved, allowing it to point anywhere in an approximately 40 degree swathe of the sky. Which, overall, is much more useful than a slightly bigger telescope that can only observe whatever happens to be directly overhead.

Arecibo's main reflector is spherical, so it doesn't focus the radio waves to a point. This is corrected using other, smaller dishes. FAST has a different strategy - the reflector is normally spherical, but can be pulled into the shape of a parabola (which directly focuses light to a point). This can be done for an area equivalent to about 300m across. The method is different but the principle is the same : it sacrifices collecting area for sky coverage. So although in terms of sensitivity and resolution it isn't really that much bigger than Arecibo after all, it will be able to survey an area ~80 degrees across, twice that available to Arecibo.
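The raw numbers make the point nicely. Comparing illuminated apertures rather than full dishes (pure geometry; real sensitivity also depends on receivers, surface accuracy and system temperature) :

import math

# Collecting area scales as diameter squared, but for these fixed dishes
# only the illuminated patch counts.
def area(diameter_m):
    return math.pi * (diameter_m / 2) ** 2

print(f"Arecibo uses {area(225) / area(305):.0%} of its full dish")     # ~54%
print(f"FAST vs Arecibo (illuminated) : {area(300) / area(225):.1f}x")  # ~1.8x, not the (500/305)^2 ~ 2.7x the raw diameters suggest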

FAST will be a fantastic instrument if and when it comes to maturity. Giant radio telescopes are not things you just build, flick a switch, and wait for the science to come pouring out. The size of the dish is just one limiting factor - it takes a very long time to reach that limit, because developing the instrumentation to do so is an uber-specialised task, never mind the fact that the construction of the dish determines which frequencies you can observe. Essentially, each telescope is its own prototype.

In the 2012 portfolio review the NSF stupidly decided to "divest" from, i.e. stop funding, the GBT, on the grounds that similar capabilities are offered by the Effelsberg 100m telescope. It's not really a fair comparison; the GBT has a far more sophisticated design. Among other things, the receivers are mounted in such a way that they don't block the incoming radio waves at all. This makes for very clean data, without the ugly artifacts that can make images hard to interpret.

Similarly, Arecibo's frequency range is more than three times greater than FAST's. Plausibly that could be extended considerably further, whereas it's not yet clear whether FAST's dish-deforming method will even work across its planned frequency range, let alone beyond it (this gets more difficult at higher frequencies, since the precision of the deformation must be greater). Arecibo also has the world's most powerful radar transmitter for asteroid observations (it's one of only two such facilities in the world) - no transmitter of any kind is planned for FAST - and Arecibo studies atmospheric physics to boot.

None of this should detract from FAST. It's just that anyone thinking that Arecibo is about to become obsolete is woefully mistaken.


Back from the grave ?

I'd thought that the controversy over NGC 1052-DF2 and DF4 was at least partly settled by now, but this paper would have you believe otherwise...