Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.

Thursday, 27 March 2025

The Most Interesting Galaxies Are SMUDGES

Ultra Diffuse Galaxies remain a very hot topic in astronomy. You know the drill by now : great big fluffy things with hardly any stars and sometimes little or no dark matter, not really predicted in numerical simulations. I'm not going to recap them again because I've done this too many times, so I leave it as an exercise for the reader to search this blog and learn all about them. Get off yer lazy arses, people !

UDGs were first found in clusters but have since been found absolutely everywhere. Why clusters ? Well, because they're so faint, getting redshift (i.e. distance) measurements of them is extremely difficult. This means their exact numbers are fiendishly difficult to characterise : without distance you can't get size, which is one of their distinguishing properties – so without size you can't even count them. And if you can't count them, you can't really say much about them at all.
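To make that concrete, here's a minimal sketch (my own, not from any particular paper) of why distance is everything : the usual UDG criterion is a physical effective radius above roughly 1.5 kpc at low surface brightness, but a survey only measures the angular size directly.

```python
# Minimal sketch : converting an angular effective radius to a physical one.
# Without a distance, the same blob on the sky could be a tiny nearby dwarf
# or a genuinely huge UDG much further away.
import numpy as np

def effective_radius_kpc(angular_size_arcsec, distance_mpc):
    """Physical effective radius from angular size, small-angle approximation."""
    radians = angular_size_arcsec * np.pi / (180.0 * 3600.0)
    return radians * distance_mpc * 1000.0   # Mpc -> kpc

# The same 10 arcsec object : dwarf-sized at 10 Mpc, UDG-sized at 100 Mpc.
for d_mpc in (10, 50, 100):
    print(d_mpc, "Mpc ->", round(effective_radius_kpc(10.0, d_mpc), 2), "kpc")
```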

Getting distances in clusters, however, is much easier. There the distance to the whole structure is already known. The first studies found lots of UDG candidates in clusters but very few in control fields, so most of those are almost certainly cluster members rather than just being coincidentally aligned on the sky. Of course it's always possible that a small fraction (at the few percent level or less) weren't really in the cluster and therefore not truly UDGs, but statistically the results were sound.

The SMUDGES project (Systematically Measuring UDGs) is a major effort to begin to overcome the limitation of relying on clusters for distance estimates*. In essence, they try to develop a procedure similar to the cluster-based approach, but one which can be applied to all different environments. They want results which are at least statistically "good enough" to estimate the distance, even if there's a considerable margin of error.

* The main alternative thus far has been gas measurements, which give you redshift without relying on the much fainter optical data. This, however, has its own issues.

This paper is mainly a catalogue, and to be honest I rarely bother reading catalogue papers. In fact I only read this one to see what low-level methods they used for the size estimates, since we have some possible UDG candidates of our own we want to check. But as it turned out, they also present some interesting science, so here it is.

Most of the paper is given over to describing these methodologies and techniques. It's pretty dry but important stuff, and as with the first cluster-based studies, they can't be sure that absolutely every candidate they find is really a UDG. Indeed, these measurements are inevitably quite a lot less reliable than the cluster studies, but they're careful to state this and the results are still plenty good enough to identify interesting objects for further study.

One interesting selection effect they note early on is that studies of individual objects tend to overestimate their masses (compared to studies of whole populations), since the objects picked for individual study tend to be particularly big, bright, and prominent. This at least helps begin to explain why some division has arisen in the community regarding the nature of UDGs : the objects studied by different groups are similar only at a broad-brush level, and in detail they may have significant differences. That's not a bias that was obvious to me, but maybe it should have been. It seems perfectly sensible with hindsight, at any rate.

And, once again, this is another study where the authors resort to flagging dodgy objects by eye, in another example of how important it is to actually look at the data. The machines haven't replaced us yet.

I won't do a blow-by-blow description of their procedures this time, but their final catalogue comprises about 7,000 objects, which they supplement with spectroscopic data where available. One of the main topics they address is the big one : what exactly are UDGs ? Are they galaxies with normal, massive dark matter halos but few stars, or do they instead have weird dark matter distributions ?

They conclude... probably the former. But this is not to say that they are "failed Milky Way" galaxies that have just not formed many stars for some reason : at the upper end they're probably still a few times less massive than that, and at the lower end that might be more than a factor ten difference. So mostly dwarf galaxies, but with normal dark matter distributions and very few stars. They get mass estimates from a combination of counting the number of globular clusters, which correlates with the total halo mass in normal galaxies, and their own statistical method to estimate other galaxy properties (which I don't fully understand). 
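As a rough sketch of the globular cluster part of that argument (using my own illustrative numbers and the commonly quoted, approximately linear scaling of very roughly five billion solar masses of halo per cluster; the paper's own calibration will differ in detail) :

```python
# Rough sketch, not the authors' actual pipeline : globular cluster counts scale
# nearly linearly with total halo mass in normal galaxies.
GC_TO_HALO = 5e9   # M_sun of halo per globular cluster; approximate literature value

def halo_mass_from_gcs(n_gc):
    """Crude halo mass estimate from a globular cluster count."""
    return n_gc * GC_TO_HALO

# A UDG with ~20 globular clusters implies a ~1e11 M_sun halo : dwarf-scale,
# still well short of the Milky Way's ~1e12 M_sun.
print(f"{halo_mass_from_gcs(20):.1e} M_sun")
```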

These relations don't always work well, however, sometimes experiencing "catastrophic failure", by which they mean errors of an order of magnitude or more. Why this should be is impossible to say at this stage, but, intriguingly, it might point to the dark matter distribution indeed being different in UDGs compared to normal galaxies, at least some of the time. Overall though this appears unlikely, because to make this work with the observed scaling relations, the dark matter would have to be more concentrated than expected, even though the stars are the exact opposite : much more spread out than usual.

Bottom line : they think UDGs are mainly dwarf galaxies (though a few may be giants), with normal dark matter contents but very poor star formation efficiency for whatever reason. I'm not so sure. They say the distribution of some parameters (e.g. stellar mass within a given radius) is the same for both UDGs and other galaxies but to me they look completely different; it doesn't help that the figure caption states two colours when there are clearly three actually used. What's going on here I don't know, but very possibly I've missed something crucial.

Of course this paper won't solve anything by itself, but it gives a good solid start for further investigations. As with the previous post, this is another example of how important it is to classify things in a homogeneous way. At least one SMUDGES object is found within our own AGES survey fields, and was in fact known to much earlier studies. Sometimes what can look at first glance to be a normal object actually turns out to be something much more unusual, but it's only when you have good, solid criteria for classification that this becomes apparent.

Which is all very good news for AGES. I suspect there are actually quite a lot more UDGs lurking in our data. All we need is a team of well-armed and angry postdocs to track them down... i.e. a great big healthy grant. Well, a man can dream.

Dey's Blue Blobs

Today's paper is more exciting than I can fully let on.

In the last few years there have been a handful of seemingly-innocuous discoveries in Virgo that don't quite fit the general trends for normal galaxies. They're very faint, very blue, metal-rich*, and some are incredibly gas-rich. The most convincing explanation thus far is that they're ram pressure dwarfs : not galaxies exactly, but bound systems of stars that formed from condensations of gas stripped by ram pressure.

* Meaning they have lots of chemicals besides hydrogen, because astronomers have weird conventions like that.

The advantage of this explanation is that ram pressure is a high speed phenomenon, so it could easily explain why the objects are so far from any candidate parent galaxies (tidal encounters can do this too, but usually require lower interaction velocities), as well as why they're so metal-rich. Primordial gas is basically nothing but hydrogen and helium, and to get complex chemistry you need multiple cycles of star formation, which makes it virtually certain that the gas here must have originated in galaxies. Why exactly they've only just started forming stars is unclear, though it's possible they do have older stellar populations which are just too faint to identify. And these things really are faint, with just a few thousand solar masses of stars... in comparison to the usual millions or billions expected in normal galaxies.

One of the main problems in understanding these objects has been the understandably crappy statistics. With only a half-dozen or so objects to work with, any conclusions about the objects as a population are necessarily suspect. That's where this paper comes in.

Finding such objects isn't at all easy. They're difficult to parameterise and tricky for algorithms to handle, so they opt for a visual search. And quite right too ! Humans are very, very good at this, as per my own work (which I'll get round to blogging soon). Having just one person run the search would risk biases and incompleteness, so they use a citizen science approach based on Galaxy Zoo.

The result was a total of nearly 14,000 "blue blob"* candidates. But this is being extremely liberal, and many of these might just be fluff : noise or distant background objects or whatever. A more rigorous restriction in which at least three people had to identify each candidate independently reduces this to just 658. Further inspection by experts trimmed this to 34 objects – a still more than respectable improvement over previous studies. And while I previously berated them for claiming that the objects only exist in clusters without having looked elsewhere, this time they at least looked at Fornax as well as Virgo. Fornax is another cluster, but interestingly no candidates were found there.

* C'mon guys, this is the name we're going with ? Really ? Oh. Well, fine. Suit yourselves.

But they don't stop with the results of the search. They cross-correlate their results with HI gas measurements from ALFALFA and, yes, AGES (thanks for the citations, kindly people !), and also observe eight of them with the 10m-class Hobby-Eberly Telescope for spectroscopy of the ionised gas. This is extremely useful as it provides a robust way of verifying that these objects are indeed in the cluster and not just coincidentally aligned, and also shows that the gas in the objects is being affected by the star formation.

Let me cover the main conclusions before I get to why I'm so excited by this work. First, their findings are fully consistent with and support the idea that these are ram pressure features. Their spectroscopy confirms the high metallicity of the objects, comparable to tidal dwarfs – so they have indeed formed from material which was previously in galaxies. They avoid the very centre of the cluster (where they'd likely be rapidly destroyed) and are preferentially found where ram pressure is expected to be effective.

There's also an interesting subdivision within these 34 candidates. 13 of them are "rank 1", meaning they are almost certainly Virgo cluster objects, whereas the others are "rank 2" and are likely to have some contamination by background galaxies. Most of the rank 2 objects follow the same general trends in colour and magnitude as normal galaxies, but the rank 1 objects are noticeably bluer. They're also forming stars at a higher than expected rate (though, interestingly, not if you account for their total stellar mass). So indeed these are galaxy-like but not at all typical of other galaxies : they are galaxian, not galaxies.

Now the fun stuff. They identify two of the supposedly optically dark clouds I found in Virgo way back when, and on which I've since based most of my career – hence, exciting ! They do have optical counterparts after all, then. Actually, one of these is relatively bright, and I suggested it as a possible counterpart back in 2016. But it wasn't convincing, and its dynamics didn't seem to match well at all. These days of course everyone is all about the weird dynamics, but back then this seemed like a pretty good reason to rule it out. Since then, our VLA data has independently confirmed the association of the stars and the gas, and Robert Minchin is writing that one up as a publication.

That object has about twenty times as much gas as stars. The second object is altogether fainter, having a thousand times or more gas than stars ! Even with our VLA data we couldn't spot this*, and I probably wouldn't even believe this claim if they didn't have the optical spectroscopy to support it. It looks likely that in this case we're witnessing the last gasp of star formation, right at the moment the gas dissolves completely into the cluster.

* The VLA data has much better resolution than the original Arecibo data, so it can localise the gas with much greater accuracy and precision. This means that it can show exactly where the HI is really located, so if there's even a really pathetic optical counterpart there, we can be confident of identifying it. But of course, that counterpart must be at least visible in the optical data to begin with.

While they comment directly on two of our objects, they actually implicitly include another three measurements in the table. We never identified these as being especially weird; they just look like faint blue galaxies but nothing terribly strange. And that really underscores the importance of having enough resources to dedicate to analysing areas in detail, which, frankly, we don't. It also shows how important it is to quantify things : visual examination is great for finding stuff, but it can't tell you if an object is a weird outlier from a specific trend. Even more excitingly, almost certainly it means that there are a lot more interesting objects in our data that have already been found but not yet recognised as important.

But the most fun part came from doing a routine check. Whenever anyone publishes anything about weird objects in our survey fields, I have a quick look to see if they're in our data and we missed them, just in case. Every once in a while something turns up. This is very rare, but the checks are easy so it's worth doing. And this time... one of the other blue blobs has an HI detection in our data we previously missed.

Which is very cool. The detection is convincing, but there are very good reasons why we initially missed it. But I don't want to say anything more about it yet, because this could well become a publication for my PhD student. Watch this space.

Sunday, 2 March 2025

Taking galaxies off life support

Very long-term readers may remember my anguished efforts (almost a decade ago) to build a stable disc galaxy. Sweet summer child that I was, I began by trying to set up the simulations to just have gas or stars, but no dark matter. I thought – understandably enough – that adding more components would just make things more complicated, so best to start simple. I was planning to gradually ramp up the complexity so I could get a feel for how simulations worked, eventually ending up with a realistic galaxy that would sit there quietly rotating and not hurting anyone.

That wasn't what I got. Instead of a nice happy galaxy I got a series of exploding rings. Had that been a real galaxy, millions of civilisations would have been flung off into the void.

It turns out that dark matter really is frightfully necessary when it comes to keeping galaxies stable. Dark matter is a galaxy's emotional support particle, preventing it from literally flying apart whenever it has a mild gravitational crisis. Stable discs are easy when you have enough dark mass to hold them together.

(Of course, this is only true in standard Newtonian gravity. Muck about with this and you can make things work without any dark matter at all, but I'm not going there today.)

You don't always need dark matter to keep things together though. Plenty of systems manage just fine without it, like planetary systems and star clusters. But it's come as a big surprise to find that there are in fact quite large numbers of galaxies which have little or no dark matter, a result which is now reasonably well confirmed (and I stress that this is an ongoing controversy). We always knew there'd be a few such oddballs, if only from galaxies formed from the debris of other galaxies as they interact. But nobody thought there'd be large numbers of them existing in isolation. So what's going on ?

Enter today's paper. This is one in a short series which to be quite honest I'd completely forgotten about, partially because the authors forgot to give the galaxy a catchy nickname. Seriously, they could learn a lot from those guys who decided to name their galaxy Hedgehog for no particular reason. I'm only half-joking here : memorable names matter !

But anyway, this was an example of a UDG with lots of gas that appeared to have no dark matter at all. I wasn't fully convinced by their estimated inclination angle though, for which even a small error can change the estimated rotation speed and thus the inferred dark matter content substantially. An independent follow-up paper by another team ran numerical simulations and found that such an object would quickly tear itself to bits, whereas if it was just a regular galaxy with a very modest inclination angle error then everything would be fine. And there have been many other such studies of different individual objects, all of them mired in similar controversies.

Since then, however, I've become much more keen on the idea that actually, a lot of these UDGs really do have a deficit or even total lack of dark matter after all. The main reason is this paper, which is highly under-cited in my view. Now it's entirely plausible that any one object might have its inclination angle measured inaccurately*. But they showed that the inclination-corrected rotation velocity of the population as a whole shows no evidence of any bias with inclination. Low inclinations, high inclinations, all can give fast or slow rotating galaxies, consistent with random errors. That some show a significantly lower rotation than expected therefore seems very much more likely to be a real effect and not the result of any systematic bias.

*Though all of these terms like "bias", "errors" and "inaccuracies" are, by the way, somewhat misleading. It's not that the authors did a bad job, it's that the data itself does not permit greater precision. That is, it allows for a range of inclination angles, some of which lead to more interesting results than others. The actual measurements performed are perfectly fine.
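As a minimal sketch of the kind of population test being described (my own toy numbers and simplified definitions, not the paper's actual analysis) : the observed HI line width only gives the line-of-sight velocity, so recovering the true rotation speed means dividing by the sine of the inclination, and the check is whether the corrected velocities show any trend with inclination.

```python
# Toy version of the inclination bias test, with entirely made-up numbers.
import numpy as np

def vrot_corrected(w50_kms, inclination_deg):
    """Deprojected rotation velocity from an HI line width W50 and inclination i."""
    return 0.5 * w50_kms / np.sin(np.radians(inclination_deg))

rng = np.random.default_rng(0)
incl = rng.uniform(20, 80, 200)                    # hypothetical inclinations (deg)
vtrue = rng.normal(45, 10, 200)                    # hypothetical true rotation speeds (km/s)
w50 = 2 * vtrue * np.sin(np.radians(incl)) + rng.normal(0, 5, 200)   # mock line widths
vrot = vrot_corrected(w50, incl)

# With no bias built in, the corrected velocities show no trend with inclination;
# a systematic deprojection error would show up as a clear correlation here.
print("correlation(vrot, incl) =", round(np.corrcoef(vrot, incl)[0, 1], 2))
```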

What about that original galaxy though ? AGC 114905 might itself still have had a measurement problem. Here the original authors return to redress the balance.

It seems that in the interim I missed one of their other observational papers which changes the estimates of exactly how much dark matter the galaxy should have; probably this is lost somewhere in my extensive reading list. The earlier simulation paper found that the object could be stable only (if at all) with a rather contrived, carefully fine-tuned configuration of dark matter, and there wasn't any reason to expect such a halo to form naturally. Couple that with the findings that it could easily be a normal galaxy if the inclination angle was just a bit off, and that made the idea of this particular object seem implausible, even if a population of other such objects did exist.

But that interim paper changes things. Whereas previously they used the gas of the object to estimate the inclination angle, now they got sufficiently sensitive optical data to measure it from the stars, and that confirms their original finding independently. They also improved their measurements of the kinematics from the gas, finding that it's rotating a bit more quickly than their original estimates, meaning it has a little bit more scope for dark matter. More significantly, the same correction found that the random motions are considerably higher than they first estimated.

What this means is that the dark matter halo can be a bit more massive than they first thought, and the disc of the galaxy doesn't have to be so thin. A thick disc with more random motions isn't so hard to keep stable because it's fine if things wander around a bit. So they do their own simulations to account for this, with the bulk of the paper given to describing (in considerable detail) the technicalities of how this was done.

They find that an object with these new parameters can indeed be stable. Rather satisfyingly, they also run simulations using the earlier parameters, as the other team already did independently. And they confirm that with that setup, the galaxy wouldn't be stable at all. So the modelling is likely sound, it's just that it depends quite strongly on the exact parameters of the galaxy. They confirm this still further with analytic formulae for estimating stability, showing that the new measurements of the rotation and dispersion are, once again, predicted to be stable.
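For reference, the standard analytic stability check for a rotating disc is a Toomre-style criterion; I don't know exactly which formulation the authors use, so take this as a sketch of the general idea with my own illustrative numbers.

```python
# Sketch of a Toomre-style stability check : Q = sigma * kappa / (pi * G * Sigma)
# for a gas disc, with Q >~ 1 meaning stable. A higher velocity dispersion (a
# thicker, hotter disc) raises Q for the same surface density.
import numpy as np

G = 4.301e-3   # gravitational constant in pc (km/s)^2 / M_sun

def toomre_q(sigma_kms, kappa_kms_per_pc, surface_density_msun_pc2):
    return sigma_kms * kappa_kms_per_pc / (np.pi * G * surface_density_msun_pc2)

# Illustrative dwarf-like values : epicyclic frequency ~10 (km/s)/kpc, gas
# surface density ~5 M_sun/pc^2. Only the dispersion is varied.
for sigma in (5.0, 10.0, 15.0):
    print(sigma, "km/s ->", round(toomre_q(sigma, 0.01, 5.0), 2))
```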

But if the galaxy actually does have a hefty dark matter halo after all, doesn't that mean it's just like every other galaxy and therefore not interesting ? No. As far as I can tell, the amount of dark matter is still significantly less than expected, but also its concentration (essentially its density) is far lower : a 10 sigma outlier ! So yes, it's still really, really weird, with the implied distribution of dark matter still apparently very contrived and unnatural.

So how could such a galaxy form ? That's the fun part. It's important to remember that just because dark matter doesn't interact with normal matter except through gravity, this is not at all the same as saying it doesn't interact at all ! So some processes you'd think couldn't possibly affect dark matter... probably can*. Like star formation, for instance. Young, massive stars tend to have strong winds and also like to explode, which can move huge amounts of gas around very rapidly. It's been suggested, quite plausibly, that this is what's responsible for destroying the central dark matter spikes which are predicted in simulations but don't seem to exist in reality. The mass of the gas being removed wouldn't necessarily be enough to drag much dark matter along with it, but it could give it a sufficient yank to disrupt the central spike.

* And it's also worth remembering that just because dark matter dominates overall, this isn't at all true locally. This means that movement of the normal baryonic matter can't always be neglected. 

The problem for this explanation here is that the star formation density must be extremely low to get objects this faint. So whether there were ever enough explosively windy stars to have a significant effect isn't clear. Quantifying this would be difficult, especially because dwarf galaxies are much more dominated by their dark matter than normal galaxies – yes, they'd be more susceptible to the effects of massive stars because they're less massive overall, but the effect on the dark matter might not necessarily be so pronounced.

The authors here favour a more exotic and exciting interpretation : self-interacting dark matter. The most common suggestion is self-annihilating dark matter that's its own anti-particle, which would naturally lead to those density spikes disappearing. There could be other forms of interaction that might also "thermalize" the spike... but of course, this is very speculative. It's an intriguing and important bit of speculation, to be sure : that we can use galaxies to infer knowledge of the properties of dark matter beyond its mere existence is a tantalising prospect ! But to properly answer this would take many more studies. It could well be correct, but I think right now we just don't have enough details of star formation to rule anything out. Continuing to establish the existence of this whole unsuspected population of dark matter-deficient galaxies is enough, for now, to be its own reward.

Wednesday, 19 February 2025

Nobody Ram Pressure Strips A Dwarf !

Very attentive readers may remember a paper from 2022 claiming, with considerable and extensive justification, to have detected a new class of galaxian object : the ram pressure dwarf. These are similar to the much more well-known tidal dwarf galaxies, which form when gravitational encounters remove so much gas from galaxies that the stripped material condenses into a brand new object. Ram pressure dwarfs would be essentially similar, but result from ram pressure stripping instead of tidal encounters. A small but increasing number of objects in Virgo seem to fit the bill for this quite nicely, as they don't match the scaling relations for normal galaxies very well at all.

This makes today's paper, from 2024, a little late to the party. Here the authors are also claiming to have discovered a new class of object, which they call a, err... ram pressure dwarf. From simulations.

I can't very well report this one without putting my sarcastic hat on. So you discovered the same type of object but two years later and only in a simulation eh ? I see. And you didn't cite the earlier papers either ? Oh.

And I also have to point out an extremely blatant "note to self" that clearly got left in accidentally. On the very first page :

Among the ∼60 ram-pressure-stripped galaxies belonging to this sample, ionized gas powered by star formation has been detected (R: you can get ionized gas that is not a result of star formation as well, so maybe you could say how they have provided detailed information about the properties of the ionized gas, its dynamics, and star formation in the tails instead) in the tentacles.

No, that's not even the preprint. That's the full, final, published journal article !

Okay, that one made me giggle, and I sympathise. Actually I once couldn't be bothered to finish looking up the details of a reference so I put down "whatever the page numbers are" as a placeholder... but the typesetter fortunately picked up on this ! 

What does somewhat concern me at a (slightly) more serious level, though, is that this got through the publication process. Did the referee not notice this ? I seem to get picked up on routinely for the most minor points which frankly amount to no more than petty bitching, so it does feel a bit unfair when others aren't apparently having to endure the same level of scrutiny.

Right, sarcastic hat off. In a way, that this paper is a) late and b) only using simulations is advantageous. It seems that objects detected initially in observational data have been verified by theoretical studies fully independently of the original discoveries. That gives stronger confirmation that ram pressure dwarfs are indeed really a thing.

Mind you, I think everyone has long suspected in the back of their minds that ram pressure dwarfs could form. After all, why not ? If you remove enough gas, it stands to reason that sometimes part of it could become gravitationally self-bound. But it's only recently that we've had actual evidence that they exist, so having theoretical confirmation that they can form is important. That puts the interpretation of the observational data on much stronger footing.

Anyway, what the authors do here is to search one of the large, all-singing, all-dancing simulations for candidates where this would be likely. They begin by looking for so-called jellyfish galaxies, in which ram pressure is particularly strong so that the stripped gas forms distinct "tentacle" structures. They whittle down their sample to ensure the galaxies have had no recent interactions with other galaxies, so that the gas loss should be purely due to ram pressure and not tidal encounters. Of the three galaxies in their sample which meet these criteria, they look for stellar and gaseous overdensities and find one good ram pressure dwarf candidate, which they present here.

By no means does this mean that such objects are rare. Their criteria for sample selection are deliberately strict so they can be extremely confident of what they've found. Quite likely there are many other candidates lurking in the data which they didn't find only because they had recent encounters with other galaxies, which would mean they weren't "purely" resulting from ram pressure. I use the quotes because determining which factor was mainly responsible for the gas loss can be extremely tricky. And simulation resolution limits mean there could be plenty of smaller candidates in there. The bottom line is that they've got only one candidate because they demand the quality of that candidate be truly outstanding, not because they're so rare as to be totally insignificant.

And that candidate does appear to be really excellent and irrefutable. It's a clear condensation of stars and gas at the end of the tentacle that survives for about a gigayear, with no sign of any tidal encounters being responsible for the gas stripping. It's got a total stellar mass of about ten million solar masses, about ten times as much gas, and no dark matter – the gas and stars are bound together by their own gravity alone. The only weird thing about it is the metallicity, which is extraordinarily high, but this appears to be an artifact of the simulations and doesn't indicate any fundamental problem.

In terms of the observational candidates, this one is similar in size but at least a hundred times more massive. Objects that small would, unfortunately, be simply unresolvable in the simulations because they wouldn't have nearly enough particles. But this is consistent with this object being just the tip of a much more numerous iceberg of similar but smaller features. Dedicated higher resolution simulations might be able to make better comparisons with the observations, at least until someone finds a massive ram pressure dwarf in observational data.

I don't especially like this paper. It contains the phrase "it is important to note" no less than four times, it says "as mentioned previously" in relation to things never before mentioned, it describes the wrong panels in the figures, and it has many one-sentence "paragraphs" that make it feel like a BBC News article if the writer was unusually technically competent. But all of these quibbles are absolutely irrelevant to the science presented, which so far as I can tell is perfectly sound. As to the broader question of whether ram pressure dwarfs form a significant component of the galaxy population, and indeed how they manage to survive without dark matter in the hostile environment of a cluster... that will have to await further studies.

How To Starve Your Hedgehog

Today, two papers on hedgehogs... sorry, quenched galaxies. It'll make more sense later on, but only slightly.

"Quenched" is just a bit of jargon meaning that galaxies have stopped forming stars, if not completely, then at least well below their usual level. There are a whole bunch of ways this can happen, but they all mostly relate to environment. Basically you need some mechanism to get the gas out of galaxies where it then disperses. In clusters this is especially easy because of ram pressure stripping, where the hot gas of the cluster itself can push out gas within galaxies. In smaller groups the main method would be tidal interactions, though this isn't as effective.

What about in isolation ? There things get tricky. Even the general field is not a totally empty environment : there are other galaxies present (just not very many) and external gas (just of very low density). But you also have to start to consider what might have happened to galaxies there over the whole of time, because conditions were radically different in the distant past.

To cut a long story short, what we find is that giant galaxies seem to have formed the bulk of their stars way back in a more exciting era when things were just getting started. Dwarf galaxies in the field, on the other hand, are still forming stars, and in fact their star formation rate has been more or less permanently constant.

This phenomenon is called downsizing, and for a long time it had everyone sorely puzzled : naively, giant galaxies ought to assemble more slowly, so were presumed to have taken longer to build up their stellar populations, whereas dwarfs should form more quickly. Simplifying greatly, the discrepancy was due to a host of problems in the details of the physics of the models, and as far as I know it's generally all sorted out now. Small amounts of gas can, in fact, quite happily maintain a lower density for longer, hence dwarfs form stars more slowly but much more persistently.

Dwarfs are, of course, much more susceptible to environmental gas removal processes than giants, and indeed dwarfs in clusters are mostly devoid of gas (except for recent arrivals). Conversely, any dwarfs which have lost their gas in the field are unexpected, because there's nothing very much going on out there : all galaxies of about the same mass should have about the same level of star formation. There's no reason that some of them should have lost their gas while others held on to it – it should be an all-or-nothing affair.

That's why isolated quenched galaxies are interesting, then. On to the new results !


The first paper concentrates on a single example which they christen "Hedgehog", because "hedgehogs are small and solitary animals" and also presumably because "dw1322m2053" is boring, and cutesy acronyms are old hat. Wise people, I approve.

This particular hedgehog galaxy is quite nearby (2.4 Mpc) and extremely isolated, at least 1.7 Mpc from any massive galaxies. That puts it at least four times further away than the expected region of influence of any nearby groups, based on their masses. It's a classic quenched galaxy, "red and dead", smooth and structureless, with no detectable star formation at all.

It's also very, very small. They estimate the stellar mass at around 100,000 solar masses, whereas for more typical dwarf galaxies you could add at least two or three zeros on to that. Now that does mean they can't say definitively if its lack of star formation is a really significant outlier, simply because for an object this small, you wouldn't expect much anyway. But in every respect it appears consistent with being a tiny quenched galaxy, so the chance that it has any significant level of star formation is remote.

How could this happen ? There are a few possibilities. While it's much further away from the massive groups than where you'd normally expect to see any effect from them, simulations have shown that it's just possible to get quenched galaxies this far out. But this is extraordinarily unlikely, given that they found this object serendipitously. They also expect these so-called "backsplash" galaxies (objects which have passed through a cluster and out the other side*) to be considerably larger than this one, because they would have formed stars for a prolonged time, right up until the point they fell into the cluster.

* I presume and hope this is a men's urinal reference.

Another option is simply that the star formation in small galaxies might be self-limiting, with stellar winds and supernovae able to eject the gas. This, they say, is only expected to be temporary (since most of the gas should fall back in after a while), so again the chances of finding something like this are pretty slim. But I'd have liked more details about this, since I would expect that for galaxies this small - and it really is tremendously small - the effects of feedback could be stronger than for more typical, more massive galaxies. Maybe stellar winds and explosions could permanently eject much more of the gas, although on the other hand galaxies this small would have fewer massive stars capable of this.

Similarly another possibility, which I don't think they mention, is quenching due to ram pressure in the field. Again, for normal dwarf galaxies, this is hardly a promising option. For ram pressure to work effectively, you need gas of reasonably high density and galaxies moving at significant speeds, neither of which happens in the field. But, studies have shown that galaxies in the field do experience (very) modest amounts of gas loss which correlates with the distance from the large-scale filaments. Ordinarily this is not really anything substantial, but for galaxies this small, it might be. Since a galaxy this small just won't have much gas to begin with, and removing it will be easy because it's such a lightweight, what would normally count as negligible gas loss might be fatal for a tiddler like this.
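As a sketch of the standard ram pressure argument (a Gunn & Gott style comparison, with made-up but field-like numbers of my own choosing), the point is that the restoring force scales with the product of the stellar and gas surface densities, which for something this faint is tiny :

```python
# Toy comparison : stripping happens roughly when rho_ext * v^2 exceeds
# 2 * pi * G * Sigma_star * Sigma_gas. All numbers below are illustrative only.
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
PC = 3.086e16        # m

def ram_pressure(n_ext_cm3, v_kms):
    rho = n_ext_cm3 * 1e6 * 1.67e-27        # hydrogen number density -> kg/m^3
    return rho * (v_kms * 1e3) ** 2         # Pa

def restoring_pressure(sigma_star_msun_pc2, sigma_gas_msun_pc2):
    to_si = M_SUN / PC**2                   # M_sun/pc^2 -> kg/m^2
    return 2 * np.pi * G * (sigma_star_msun_pc2 * to_si) * (sigma_gas_msun_pc2 * to_si)

# A very tenuous external medium at modest field velocities can already rival
# the self-gravity of an extremely low surface density dwarf.
print("ram pressure :", f"{ram_pressure(1e-5, 100):.1e}", "Pa")
print("restoring    :", f"{restoring_pressure(0.1, 1.0):.1e}", "Pa")
```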

The most interesting option is reionisation. When the very first stars were formed, theory says, there were hardly any elements around except hydrogen and helium and a smattering of others. Heavier elements allow the gas to cool and therefore condense more efficiently, so today's stars are comparative minnows. But with none of this cooling possible, the earliest stars were monsters, perhaps thousands of times more massive than the Sun. They were so powerful that they reionised the gas throughout the Universe, heating it so that cooling was strongly suppressed, at least for a while. In more massive galaxies gravity eventually overcame this, but in the smallest galaxies it could be halted forever.

Hedgehog, the authors say, is right on the limit where quenching by reionisation is expected to be effective. If so then it's a probe of conditions in the very early universe, one which is extremely important as it's been used a lot to explain why we don't detect nearly as many dwarf galaxies as theory otherwise predicts*. The appealing thing about this explanation is the small size and mass of the object, which isn't predicted by other mechanisms.

* They do mention that the quenched fraction of galaxies in simulations rises considerably at lower masses, but how much of this is due to reionisation is unclear.

This galaxy isn't quite a singular example, but objects like this one are extremely rare. Of course ideally we'd need a larger sample, which is where the second paper comes in.


This one is a much more deliberate attempt to study quenched galaxies, though not necessarily isolated. What they're interested in is our old friends, Ultra Diffuse Galaxies, those surprisingly large but faint fluffy objects that often lack dark matter. In this paper the authors used optical spectroscopy to target a sample of 44 UDGs, not to measure their dynamics (the spectroscopic measurements are too imprecise for that) but to get their chemical composition. With this they can identify galaxies in a post-starburst phase, essentially just after star formation has stopped. That kind of sample should be ideal for identifying where and when galaxies get quenched.

I'm going to gloss over a lot of careful work they do to ensure their sample is useful and their measurements accurate. The sample size is necessarily small because UDGs are faint, and their own data finds that some of the distance estimates were wrong so a few candidates weren't actually UDGs after all. Their final result of 6 post-starburst UDGs doesn't sound much, and indeed it isn't, but these kinds of studies are still in their very early days and you have to start somewhere.

Even with the small size, they find two interesting results. First, the fraction of quenched UDGs is around 20%, much higher than for the general field population. The stellar masses are a lot higher than Hedgehog's, though still small compared to most dwarfs, so this result needs to be treated with a bit of caution, but it's definitely interesting. Second, while most quenched UDGs do appear to result from environmental effects, a few are indeed isolated. Which is a bit weird and unexpected. UDGs in clusters might form by gas loss from more "typical" galaxies, but this clearly can't work in the field, so why only a select few should lose gas isn't clear at all.


What all this points to isn't all that surprising, though in a somewhat perverse sense : it underscores that we don't fully understand the physics of star formation. The authors of the second study favour stellar feedback as being responsible for a temporary suppression of star formation. If this is common and repeated, with galaxies experiencing many periods of star formation interspersed with lulls, that could also make Hedgehog a bit less weird : if, say, it spends roughly equal total amounts of time forming and not forming stars, then it wouldn't be so strange to detect it during a quenched phase. And of course the lower dark matter content of UDGs surely also has some role to play in this, although what that might be is anyone's guess.

As usual, more research is needed. At this point we just need more data, both observational and from simulations. That we're still finding strange objects that're hard to explain isn't something to get pessimistic about though. We've learned a lot, but we're still figuring out just how much further we have to go before we really understand these objects.

Monday, 17 February 2025

Sports Stars Can Save Humanity

I know, I know, I get far less than my proverbial five-a-day so far as reading papers goes. Let me try and make some small amends.

Today, a brief overview of a couple of visualisation papers I read while I was finishing off my own on FRELLED, plus a third which is somewhat tangentially related.


The first is a really comprehensive review of the state of astronomical visualisation tools in 2021. Okay, they say it isn't comprehensive, which is strictly speaking true, but that would be an outright impossible task. In terms of things at a product-level state, with useable interfaces, few bugs and plenty of documentation, this is probably as close as anyone can realistically get.

Why is a review needed ? Mainly because with the "digital tsunami" of data flooding our way, we need to know which tools already exist before we go about reinventing the wheel. As they say, there are data-rich but technique-poor astronomers and data-poor but technique-rich visualisation experts, so giving these groups a common frame of reference is a big help. And as they say, "science not communicated is science not done". The same is true for science ignored as well, of which I'm extremely guilty... you can see from the appallingly-low frequency of posts here how little time I manage to find for reading papers. 

So yeah, having everything all together in one place makes things very much easier. They suggest a dedicated keyword for papers, "astrovis", to make everything easier to find. As far as I know this hasn't been adopted anywhere, but it's a good idea all the same.

Most of the paper is given to summarising the capabilities of assorted pieces of software, some of which I still need to check out properly (and yes, they include mine, so big brownie points to them for that !). But they've also thought very carefully about how to organise all this into a coherent whole. For them there are five basic categories for their selected tools : data wrangling (turning data into something suitable for general visualisation), exploration, feature identification, object reconstruction, and outreach. They also cover the lower-level capabilities (e.g. graph plotting, uncertainty visualisation, 2D/3D, interactivity) without getting bogged-down in unproductively pigeon-holing everything. 

Perhaps the best bit of pigeon-unholing is something they quote from another paper : the concept of explornation, an ugly but useful word meaning the combination of exploration and explanation. This, I think, has value. It's possible to do both independently, to go out looking at stuff and never getting any understanding of it at all, or conversely to try and interpret raw numerical data without ever actually looking at it. But how much more powerful is the combination ! Seeing can indeed be believing. The need for good visualisation tools is not only about making pretty pictures (although that is a perfectly worthwhile end in itself) but also in helping us understand and interpret data in different ways, every bit as much as developing new techniques for raw quantification. 

I also like the way they arrange things here because we too often tend to ignore tools developed for different purposes other than our own field of interest. And they're extraordinarily non-judgemental, both about individual tools and different techniques. From personal experience it's often difficult to remain so aloof, to avoid saying, "and we should all do it this way because it's just better". Occasionally this is true, but usually what's good for one person or research topic just isn't useful at all for others.

On the "person" front I also have to mention that people really do have radically different preferences for what they want out of their software. Some, inexplicably, genuinely want everything to do be done via text and code and nothing else, with only the end result being shown graphically. Far more, I suspect, don't like this. We want to do everything interactively, only using code when we need to do something unusual that has to be carefully customised. And for a long time astronomy tools have been dominated too much by the interface-free variety. The more that's done to invert the situation, the better, so far as I'm concerned.


The second paper presents a very unusual overlap between the world of astronomy and... professional athletes. I must admit this one languished in my reading list for quite a while because I didn't really understand what it was about from a quick glance at the abstract or text, mostly because of my own preconceptions : I was expecting it to be about evaluating the relative performance of different people at source-finding. Actually this is (almost) only tangential to the main thrust of the paper, though it's my own fault for misreading what they wrote.

Anyway, professional sports people train themselves and others by reviewing their behaviour using dedicated software tools. One of the relatively simple features that one of these (imaginatively named "SPORTSCODE") has is the ability to annotate videos. This means that those in training can go back over past events and see relevant features, e.g. an expert can point out exactly what and where something of interest happened – and thereby, one hopes, improve their own performance.

What the authors investigate is whether astronomers can use this same technique, even using the same code, to accomplish the same thing. If an expert marks on the position of a faint source in a data cube, can a non-expert go back and gain insight into how they made that identification ? Or indeed if they mark something they think is spurious, will that help train new observers ? The need for this, they say, is that ever-larger data volumes threaten to make training more difficult, so having some clear strategy for how to proceed would be nice. They also note that medical data, where the stakes are much, much higher, relies on visual extraction, while astronomical algorithms have traditionally been... not great. "Running different source finders on the same data set rarely generates the same set of candidates... at present, humans have pattern recognition and feature identification skills that exceed those of any automated approach."

Indeed. This is a sentiment I fully endorse, and I would advocate using as much visual extraction as possible. Nevertheless, my own tests have found that more modern software can approach visual performance in some limited cases, but a full write-up on that is awaiting the referee's verdict.

While this paper asks all the right questions, it presents only limited answers. I agree that it's an interesting question as to whether source finding is a largely inherent or learned (teachable) skill, but most of the paper is about the modifications they made to SPORTSCODE and its setup to make this useful. The actual result is a bit obvious : yes indeed, annotating features is useful for training, and subjectively this feels like a helpful thing to do. I mean... well yeah, but why would you expect it to be otherwise ? 

I was hoping for some actual quantification of how users perform before and after training – to my knowledge nobody has ever done this for astronomy. We muddle through training users as best we can, but we don't quantify which technique works best. That I would have found a lot more interesting. As it is, it's an interesting proof of concept, and it asks all the right questions, but the potential follow-up is obvious and likely much more interesting and productive. I also have to point out that FRELLED comes with all the tools they use for their training methods, without having to hack any professional athletes (or their code) to get them to impart their pedagogical secrets.


The final paper ties back into the question of whether humans can really outperform algorithms. I suppose I should note that these algorithms are indeed truly algorithms in the traditional, linear, procedural sense, and nothing at all to do with LLMs and the like (which are simply no good at source finding). What they try to do here is use the popular SoFiA extractor in combination with a convolutional neural network. SoFiA is a traditional algorithm, which for bright sources can give extremely reliable and complete catalogues, but it doesn't do so well for fainter sources. So to go deeper, the usual approach is to use a human to vet its initial catalogues to reject all the likely-spurious identifications.

The authors don't try to replace SoFiA with a neural network. Instead they use the network to replace this human vetting stage. Don't ask me how neural networks work but apparently they do. I have to say that while I think this is a clever and worthwhile idea, the paper itself leaves me with several key questions. Their definition of signal to noise appears contradictory, making it hard to know exactly how well they've done : it isn't clear to me whether they really used the integrated S/N (as they claim) or the peak S/N (as per their definition). The two numbers mean very different things. It doesn't help that the text is replete with superlatives, which did annoy me quite a bit.
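To show why the distinction matters, here's a minimal sketch using my own simplified definitions (real HI survey S/N definitions also fold in the velocity width and smoothing, so treat this as purely illustrative) :

```python
# Peak versus integrated S/N for a broad, faint source : the peak value looks
# hopeless while the integrated value looks quite solid. Which one a paper
# quotes changes the impression of how deep the method really goes.
import numpy as np

def peak_snr(spectrum, rms):
    return spectrum.max() / rms

def integrated_snr(spectrum, rms, n_channels):
    # Total flux divided by the noise expected when summing n independent channels.
    return spectrum.sum() / (rms * np.sqrt(n_channels))

rms = 1.0
spectrum = np.full(30, 1.5)    # 30 channels, each at only 1.5 sigma
print("peak S/N       :", round(peak_snr(spectrum, rms), 1))            # ~1.5
print("integrated S/N :", round(integrated_snr(spectrum, rms, 30), 1))  # ~8.2
```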

The end result is clear enough though, at least at a qualitative level : this method definitely helps, but not as much as visual inspection. It's interesting to me that they say this can fundamentally only approach but not surpass humans. I would expect that a neural network could be trained on data containing (artificial) sources so faint a human wouldn't spot them, but knowing they were there, the program could be told when it found them and thereby learn their key features. If this isn't the case, then it's possible we've already hit a fundamental limit : that when humans start to dig into the noise, they're doing about as well as it's ever possible to do by any method. When you get to the faintest features we can find, there simply aren't any clear traits that distinguish signal from noise. Actually improving on human vision in any significant way might require a radically different approach... but it might even be an altogether hopeless challenge.

And that's nice, isn't it ? Cometh the robot uprising, we shall make ourselves useful by doing astronomical source-finding under the gentle tutelage of elite footballers. 

Or not, because the fact that algorithms can be thousands of times faster can more than offset their lower reliability levels, but that's another story.

Phew ! Three papers down, several hundred more to go.

Saturday, 11 January 2025

Turns out it really was a death ray after all

Well, maybe.

Today, not a paper but an engineering report. Eh ? This is obviously not my speciality at all, in any way, shape or form. In fact reading this only revealed to me even further the tremendous depths of my own ignorance regarding materials science and engineering practices. The former is something I never cared for at undergraduate level and the latter is something about which I know literally nothing. Naturally, I wouldn't normally even glance at a report like this, except that it's about a topic that's personally important to me : why Arecibo collapsed.

There's an okay-but-short press release version here. It's interesting to see the extent of the deconstruction at the site, which was already well advanced in 2021; I couldn't find a more recent photo. Otherwise the Gizmodo version is the 30-second read and not much else. For this post I read most of the full 113 page report, which really is "jaw dropping", at least in parts, as Gizmodo described it. Unsurprisingly there are fairly hefty tracts where my eyes glazed over, but there's still plenty in here that's accessible and understandable to non-engineers like me.

In a nutshell, Arecibo collapsed due to a combination of factors, two of which are predictable enough but the third is something nobody expected. The first two are inadequate maintenance and the impact of hurricane Maria. But it's important not to oversimplify, as these are intimately bound with the third : the effects of the radar transmitter. This is not quite a case where one can simply say, "if they'd just done their jobs properly then it'd still be standing today", though the report does contain some damning stuff.

Going through this linearly would end up being a shorter version of the report, which wouldn't really help anyone. If you want that level of detail you should go through it yourself; it's thorough to the point of going back to hand-written notes from the earliest days of the telescope. I have to say, though, that it's also highly repetitive in parts and in my view somewhat self-contradictory in places – but as it says, this is a preprint and still subject to editorial revision. Anyway, rather than doing a blow-by-blow breakdown, let me extract some broader lessons here.


Safety is not the same as redundancy

Probably the most general lesson is, I presume, obvious to anyone with an engineering background. But to an outsider like me, the distinction between safety and redundancy was interesting : it makes a lot of intuitive sense but I'd never heard of it before. Safety, apparently, refers to the breaking point of any particular element. For example a cable with a safety factor of two could support twice its current load before it would snap. Redundancy, on the other hand, is about how many elements could fail before the whole structure would come crashing down. Arecibo's three towers, they say, don't provide redundancy because a single failure would inevitably mean a total collapse (compare with the six of FAST).
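A toy illustration of the distinction, with entirely made-up numbers of my own (the report obviously works with real loads and geometry) :

```python
# Safety factor : how far an individual element is from its own breaking load.
# Redundancy : whether the structure survives losing an element entirely.
def safety_factor(breaking_load, current_load):
    return breaking_load / current_load

def is_redundant(n_elements, n_required):
    # Redundant only if at least one element can fail with the structure still standing.
    return n_elements > n_required

print("cable safety factor :", safety_factor(breaking_load=200.0, current_load=100.0))
print("3 towers, all 3 needed  ->", is_redundant(3, 3))   # no redundancy
print("6 towers, only 5 needed ->", is_redundant(6, 5))   # hypothetical redundant case
```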

Of course it's very unlikely that even a single tower would ever fail because their safety factor was massive, so redundancy there was unnecessary (at least regarding any failure of the concrete towers themselves). The same can't be said for the metal cables, where the safety factor generally seems to have been about a factor of two or a bit less, in accordance with standard design practices – still plenty, but with a need for redundancy just in case. The report stops short of saying that there were any actual design flaws in the telescope, but does note that it obviously would have been better if there had been more towers. Safety factors, they say, were not the issue, although I think I detect some inconsistency here. Where they do issue an outright criticism, for example, is that while the original cable system had redundancy, this was no longer true after the 1997 upgrade that added the 900-tonne Gregorian dome and altered the cable system. Which is a little bit in contradiction with their claims that the telescope didn't suffer from design faults. It's a bit muddled.


Poor maintenance contributed to but did not cause the collapse

At least this is the overall gist I got. There's plenty of criticism levelled here, but it's hard to disentangle how serious the maintenance problems really were. As I read it, a more diligent maintenance program probably could have prevented the collapse, but this is partly with the benefit of hindsight – the failures which occurred were unprecedented (see below) but should have been spotted all the same. Of particular concern is that there wasn't enough knowledge transfer during the telescope's two changes of management (I'll speak from first-hand experience in declaring that management changes should be avoided for a host of other reasons; I went through one such change at Arecibo and God knows what the staff must have felt like when a second took place only a few years later). In addition, and probably worst of all, the post-Hurricane Maria repair efforts were much too slow – taking months to even get started and set to last for years – and targeted a cable which never failed. Major repairs needed to happen far sooner, but there was also a need to identify the failures more accurately.

The failures were of the cable sockets rather than the cables themselves. In these "spelter sockets", there's normally some degree of cable pullout after construction is complete and the structure assumes its full load : these sockets are widely used so this is known to be absolutely normal and no cause for concern. But the report is somewhat ambiguous as to whether the extra pullout which happened could have been noticed. Sometimes it sounds quite damning in describing the extra movement as "clear" but elsewhere describes it as "not accessible by visual inspection". The amount of movement we're talking about, until the point of the collapse itself, was small, of the order of 1 cm or so. It certainly isn't something you'd spot from a casual glance, but you could measure it by hand easily enough with a ruler. Not noticing this, if I understand things correctly, meant that the cables were estimated to still have their original high safety factors whereas in fact they were much lower. They say this "should have raised the highest alarm level, requiring urgent action". Perhaps most damning, they also say that it is "highly unlikely" that this excessive pullout went unnoticed. They also note that there was a lack of good documentation of maintenance records and procedures.

The contribution of the recent hurricanes, especially Maria, was extremely significant in precipitating the collapse. In fact, "absent Maria, the Committee believes the telescope would still be standing today". Pullout from the sockets shortly after installation is entirely normal as the structure takes the weight, but after that, any further movement isn't normal at all. This did in fact happen, and should have been spotted – but even this, as we'll see, apparently wasn't enough to bring down the telescope by itself.

One final point is that Arecibo wasn't well liked by backend management. I often had the impression of a behind-the-scenes mood of delenda est Arecibo, or at the very least, that that was what some staff members sincerely believed was happening even if it wasn't true. The report notes that a 2006 NSF report recommended closing Arecibo by 2011 if other funding sources couldn't be found, which I found truly bizarre. This was less than ten years after a major upgrade and exactly at the point the biggest surveys were just beginning. As to why anyone would think that closing it at that particular moment was a good idea, I'm truly at a loss. Nothing about it, even with some familiarity with the larger-than-life politics behind the place, has ever made a lick of sense to me.

This is not, I hasten to add, any suggestion of deliberately shoddy maintenance; inasmuch as that was inadequate, there is no need to attribute that to anything besides an incompetently low budget. One strikingly simple recommendation in the report is that funding sources for site operations (e.g. science and development) and maintenance be entirely separate, so there is no chance of any conflict of interest or competition for resources which are essential to both.


The failure was unprecedented

The final and most interesting point of the report, the big headline message, is that Arecibo may have failed because of its radar transmitter. The report is emphatic, and repeats almost ad nauseam, that the kind of socket failures seen here have never before occurred in a century of operations of identical sockets used in bridges and other structures around the world. The damage from the hurricanes was significant, but not enough by itself to explain the failure. There is a crucial missing factor here.

The explanation suggested in the report is electroplasticity. In laboratory conditions, material creep (stretching) can be induced by electrical currents, apparently directly because of the energy released by the flow of electrons. As they note, in the lab this has been found under much higher currents operating for much shorter times, but could presumably work at lower currents if sustained for much longer periods. If correct, this would be Arecibo's final first, another effect of its unique nature. Such currents, they hypothesise, would have been induced by the powerful 1 MW radar transmitter used for zapping asteroids and other Solar System objects. This would explain why the cables failed while still having apparently high safety factors, and possibly account for why the failures occurred in some of the youngest cables with no evidence of manufacturing defects (which weren't even the ones with the highest load). It would also, of course, explain why no such socket failures have ever been seen elsewhere. Hardly anything else has this combination of radar transmitter and spelter sockets, let alone in tropical conditions in an earthquake zone.

The report goes quite deep into the technical details of electroplasticity. Interestingly, it notes that even less powerful sources can induce currents in human skin that can be directly sensed a few hundred feet from the transmitter. The problem is that understanding the effects of these currents requires highly detailed simulations accounting for the complicated structure of Arecibo's cables and the exact path the current would follow, using data on low, long-term currents which at present doesn't exist. The most obvious deficiency seems to me that they don't estimate just how long the radar was ever transmitting for. Sure, it was up there for decades, but it wasn't used routinely : regularly, to be sure, but not daily. This is something where a crude estimate should be relatively easy by searching the observing records; even the schedule of what was planned (which didn't always match what was actually done, usually because the, err, radar broke) would give a rough indication.
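Just to illustrate the kind of tally I have in mind, here's a minimal sketch, assuming a hypothetical CSV export of the scheduling records (the file name, column names and radar keyword are all made up for the example; scheduled hours would only be an upper limit on actual transmit time) :

```python
import csv
from collections import defaultdict

def total_radar_hours(schedule_csv, radar_keyword="radar"):
    """Crude tally of scheduled radar hours per year from a (hypothetical)
    schedule export with columns : date, instrument, hours_scheduled."""
    hours_per_year = defaultdict(float)
    with open(schedule_csv, newline="") as f:
        for row in csv.DictReader(f):
            if radar_keyword.lower() in row["instrument"].lower():
                hours_per_year[row["date"][:4]] += float(row["hours_scheduled"])
    return dict(sorted(hours_per_year.items()))

# e.g. total_radar_hours("arecibo_schedule.csv")
# -> {"1997": 310.0, "1998": 275.5, ...}   (illustrative output only)
```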




If the report is correct, then there's little need for concern about other structures. The report strongly disagrees that Arecibo points to a need to revise safety standards for spelter sockets more generally; unless your bridge is in the path of a 1 MW S-band radar transmitter, you can carry on with your morning commute as usual. Well, that's good. Clearly, regardless of electroplasticity, something happened here that was truly exceptional, and it's not worth worrying about whether it will ever be repeated. Not unless you're an engineer, at any rate.

Whether electroplasticity really was the cause I'm not qualified to judge. Talking to someone older and wiser, the opinion I got was "they had to come up with something". I don't disagree with that – there just isn't enough data here to say anything for certain. It could be electroplasticity, or it could be something the committee just didn't think of. More analysis of the surviving hardware, along with more studies and simulations, is badly needed.

The broader lesson I would take from all this is that you can run things on a shoestring for a while, but you can't keep trying to do more with less indefinitely. Yes, I'm coloured by my political biases, but austerity to survive a short-term hit is very different to austerity as a way of life : one is manageable, the other isn't. Such a policy does far more harm than good. Yes, you save a little money immediately, but you ultimately lose an awful lot more a little further down the line. So if you're going to fund things, fund them as properly as you can. Incorporate redundancy thinking into managerial practices as well as engineering standards. Have teams large enough to survive the loss of several members. Hire separate observing support staff rather than expecting scientists to do everything.

Finally, don't expect people to work for meagre compensation (and I'm not thinking here just financially but in other benefits too : high pay is useless with long hours and/or little holiday time) just because they enjoy their job. Not even the most wildly enthusiastic, energy-driven fanatic can operate at 110% for long. Just because someone is uncomfortable doesn't mean they're working extra-hard. Part of America's puritan hangover appears to be thinking that work = bad => people who are suffering are good workers. In the end this just leads to everyone hating their job and wanting to overthrow the system, but having no clue what to replace it with. Far better to reverse the thinking and presume that those who are happy and comfortable are the best workers.

This has taken a rather political turn, but it's not unmotivated by my experiences at Arecibo. One notorious manager was definitely of the ilk who believe that more work good, less work bad. Thankfully this is not a mentality I've encountered much in Europe. And, in my view, understanding this isn't just good for us as people, but actually as scientists in getting the work done we want to do. By all means, take a liberal approach : let those who want to work obsessively, who actively thrive because of it, do so, but don't presume the same conditions produce the same results from different people. They don't. As with good software interface design, in the end, solving these issues is just as important for the science we want to do as the scientific problems themselves. Soft issues produce hard results.

Monday, 5 August 2024

Giants in the deep

Here's a fun little paper about hunting the gassiest galaxies in the Universe.

I have to admit that FAST is delivering some very impressive results. Not only is it finding thousands upon thousands of galaxies – not so long ago the 30,000 HI detections of ALFALFA were leagues ahead of everything else, but this has already been surpassed – it also looks to be delivering in terms of data quality. This paper exploits that to the extreme with the FAST Ultra Deep Survey, FUDS.

Statistically, big, all-sky surveys are undeniably the most useful. With a data set like that, anyone can search the catalogue and see if anything was detected at any point of interest, and at the very least they can get an upper limit of the gas content of whatever they're interested in. Homogeneity has value too. But of course, with any new telescope you can always go deeper, as long as you're prepared to put in the observing time. That can let you find ever-fainter nearby sources, or potentially sources at greater distances. Or indeed both.

It's the distance option being explored in this first FUDS paper. Like previous ultra-deep surveys from other telescopes, FUDS takes a pencil-beam approach : incredibly sensitive but only over very small areas. Specifically, it's about 12 times more sensitive than AGES but in an area almost 50 times smaller (or, if you prefer, 44 times more sensitive than ALFALFA but in an area 1,620 times smaller). This paper looks at the first of their six 0.72 square degree fields, concentrating on the HI detections at redshifts of around 0.4, or a lookback time of about 4 Gyr. Presumably they have redshift coverage right down to z=0, but they don't say anything about that here.

They certainly knock off a few superlatives though. As well as being arguably the most distant direct detection of HI (excluding lensing) they also have, by a whisker, the most massive HI detection ever recorded – just shy of a hundred billion solar masses. For comparison, anything above ten billion is considered a real whopper.

All this comes at a cost. It took 95 hours of observations in this one tiny field and they only have six detections at this redshift. On the other hand, there's really just no other way to get this data at all (with the VLA it would take a few hundred hours per galaxy). Theoretically one could model how much HI would be expected in galaxies based on their optical properties and do much shorter, targeted observations which would be much more efficient. But this redshift is already high enough that optically the galaxies look pretty pathetic, not because they're especially dim but simply because they're so darn far away. So there just isn't all that much optical data to go on.

As you might expect, these six detections tend to be of extraordinarily gas-rich galaxies, with correspondingly high star formation rates. While they're consistent with scaling relations from local galaxies, their number density is higher than the local distribution of gas-rich galaxies would predict. That's probably their most interesting finding : that we might be seeing the effects of gas evolution (albeit at a broad statistical level) over time. And it makes sense. We expect more distant galaxies to be more gas-rich, but exactly how much has hitherto been rather mysterious : other observations suggest that galaxies have been continuously accreting gas to replenish at least some of what they've consumed. For the first time we have some actual honest-to-goodness data* about how this works.

* Excluding previous results from stacking. These have found galaxies at even higher redshifts, but since they only give you the result in aggregate and not for individual galaxies, they're of limited use.

That said, it's probably worth being a bit cautious as to how well they can identify the optical counterparts of the HI detections. At this distance their beam size is huge, a ginormous 1.3 Mpc across ! That's about the same size as the Local Group and not much smaller than the Virgo Cluster. And they do say that in some cases there may well be multiple galaxies contributing to the detection. 

A particular problem here is the phenomenon of surface brightness dimming. The surface brightness of a galaxy scales as (1+z)^4. For low redshift surveys like AGES, z is at most 0.06, so galaxies appear only about 25% dimmer than they would nearby. But at z=0.4 this reaches a much more worrying factor of four. And the most HI-rich galaxy known (apart from those in this sample), Malin 1, is itself a notoriously low surface brightness object, so very possibly there are more galaxies contributing to the detections than they've identified here. It would be interesting to know if Malin 1 would be optically detectable at this distance...
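Those numbers are easy to check – a two-line sanity test of the (1+z)^4 factor :

```python
# Cosmological surface brightness dimming goes as (1+z)^4.
for z in (0.06, 0.4):
    print(f"z = {z}: dimmed by a factor of {(1 + z)**4:.2f}")
# z = 0.06 -> 1.26 (about 25% dimmer); z = 0.4 -> 3.84 (roughly a factor of four)
```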

On the other hand, one of their sources has the classic double-horn profile typical of ordinary individual galaxies. This is possible but not likely to arise by chance alignment of multiple objects : it would require quite a precise coincidence both in space and velocity. So at least some of their detections are very probably really of individual galaxies, though I think it's going to take a bit more work to figure out exactly which ones.

It's all quite preliminary so far, then. Even so, it's impressive stuff, and promises more to come in the hopefully near future.

Thursday, 1 August 2024

Going through a phase

Still dealing with the fallout from EAS 2024, this paper is one I looked up because someone referenced it in a talk. It caught my attention because the speaker mentioned how some galaxies have molecular gas but not atomic, and vice-versa. But we'll get to that.

I've no idea who the author is but the paper strongly reminds me of my first paper. There I was describing an HI survey of part of the Virgo cluster. This being my first work, I described everything in careful, meticulous detail, being sure to consider all exceptions to the general trends and include absolutely everything of any possible interest to anyone, insofar as I could manage it. Today's paper is an HI survey of part of the Fornax cluster and it is similarly careful and painstaking. If this paper isn't a student's first or early work, if it's actually a senior professor... I'm going to be rather embarrassed.

Anyway, Fornax is another nearby galaxy cluster, a smidgen further away than Virgo at 20 compared to 17 Mpc. It's nowhere near as massive, probably a factor of ten or so difference, but considerably more compact and dense. It also has less substructure (though not none) : the parlance being "more dynamically evolved", meaning that it's had more time to settle itself out, though it's not quite finished assembling itself yet. Its velocity dispersion of ~400 km/s is quite a bit smaller than the >700 km/s of Virgo, but like Virgo, it too has hot X-ray gas in its intracluster space.

This makes it a natural target for comparison. It should be similar enough that the same basic processes are at work in both clusters : both should have galaxies experiencing significant tidal forces from each other, and galaxies in both clusters should be losing gas through ram pressure stripping. But the strengths of these effects should be quite different, so we should be able to see what difference this makes for galaxy evolution.

The short answer is : not all that much. Gas loss is gas loss, and just as I found in Virgo, the correlations are (by and large) the same regardless of how the gas is removed. I compared the colours of galaxies as a function of their gas fraction; here they use the more accurate parameter of true star formation rate, but the finding is the same. 

The major overall difference appears to be a survival bias. In Virgo there are lots of HI-detected and non-detected galaxies all intermingled, and while there is a significant difference in their preferred locations, in Fornax this segregation is much stronger : there are hardly any HI-detected galaxies inside the cluster proper at all. Most of the detections appear to be on the outskirts or even beyond the approximate radius of the cluster entirely. Exact numbers are a bit hard to come by though : the authors give a very thorough review of the state of affairs but don't summarise the final numbers. Which doesn't really matter, because the difference is clear.

What detections they do have, though, are quite similar to those in Virgo. They cover a range of deficiencies, meaning they've lost different amounts of gas. And that correlates with a similar change in star formation rate as seen in Virgo and elsewhere. They also tend to have HI extensions and asymmetrical spectra, showing signs that they're actively in the process of losing gas. Just like streams in Virgo, the total masses in the tails aren't very large, so they still follow the general scaling relations. 

So far, so standard. All well and good, nothing wrong with standard at all. They also quantify that the galaxies with the lowest masses tend to be the most deficient, which is not something I saw in Virgo, and is a bit counter-intuitive : if a galaxy is small, it should more easily lose so much gas as to become completely undetectable, so high deficiencies should only be detectable in the most massive galaxies. But in Fornax, where the HI detections may be more recent arrivals and ram pressure is weaker, this makes sense. They also show that the detections are likely infalling into the cluster for the first time, using a now-standard phase diagram* which demonstrates this extremely neatly.

* Why they call them this I don't know. They plot velocity relative to the systemic velocity of the cluster as a function of clustercentric distance, which has nothing at all to do with "phases" in the chemical sense of the word.
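For anyone who hasn't seen one, here's a minimal sketch of what such a diagram looks like, using made-up points rather than the actual Fornax data :

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative cluster "phase diagram" : velocity offset from the cluster's
# systemic velocity (normalised by its velocity dispersion) against projected
# clustercentric distance. The points here are random, purely for illustration.
rng = np.random.default_rng(1)
r = rng.uniform(0.0, 2.5, 60)     # projected distance / virial radius
dv = rng.normal(0.0, 1.0, 60)     # (v_galaxy - v_cluster) / sigma_cluster

plt.scatter(r, dv, s=15)
plt.axhline(0, color="grey", lw=0.5)
plt.xlabel("Projected clustercentric distance / virial radius")
plt.ylabel("(v_galaxy - v_cluster) / velocity dispersion")
plt.show()
```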

The one thing I'd have liked them to try at this point would be stacking the undetected galaxies to increase the sensitivity. In Virgo this emphatically didn't work : it seems that there, galaxies which have lost so much gas as to be below the detection threshold have really lost all their gas entirely. But since Fornax dwarfs are still detectable even at higher deficiencies, the situation might be different here. Maybe some of them are indeed just below the threshold for detectability, in which case stacking might well find that some still have gas.

Time to move on to the feature presentation : galaxies with different gas phases. Atomic neutral hydrogen, HI, is thought to be the main reservoir of fuel for star formation in a galaxy. The fuel tank analogy is a good one : the petrol in the tank isn't powering the engine itself. For that, you need to allow the gas to cool to form molecular gas, and it's this which is probably the main component for actual star formation.

There are plenty of subtleties to this. First, there's some evidence that HI is also involved directly in star formation : scaling relations which include both components have a smaller scatter and better correlation than ones which only use each phase separately. Second, galaxies also have a hot, low density component extending out to much greater distances. If the molecular gas is the fuel actually in the engine and the HI is what's in the tank, then this corona is what's in the petrol station, or, possibly, the oil still in the ground. And thirdly, cooling rates can be strongly non-linear : left to itself, HI gas will pretty much mind its own business and take absolutely yonks to cool into a molecular state.

Nevertheless this basic model works well enough. And what they find here is that while most galaxies have nice correlations between the two phases – more atomic gas, more molecular gas – some don't. Some have lots of molecular gas but no detectable HI. Some have lots of HI but no detectable molecular gas. What's going on ? Why are there neat relations most of the time but not always ?

Naively, I would think that CO without HI is the harder to explain. The prevailing wisdom is that gas starts off very hot indeed and slowly cools into warm HI (10,000 K or so) before eventually cooling to H2 (perhaps 1,000 K but this can vary considerably, and it can also be much colder). Missing this warm phase would be weird.

And if we were dealing with a pure gas system then the situation would indeed be quite bewildering. But these are galaxies, and galaxies in clusters no less. What's probably going on is something quite mundane : these systems are, suggest the authors, ones which have been in the cluster for a bit longer. There's been time for the ram pressure to strip the HI, which tends to be more extended and less tightly bound, leaving behind the H2 – which hasn't yet had the time to fully transform into stars. So all the usual gas physics is still in play, it's just there's been this extra complication of the environment to deal with.

What of the opposite case – galaxies with HI but no H2 ? How can you consume the H2 without similarly affecting the HI ? There things might be more interesting. They suggest several options, none of which are mutually exclusive. It could be that the HI is only recently acquired, perhaps from a tidal encounter or a merger. The former sounds more promising to me : gas will be preferentially ripped off from the outskirts of a galaxy where it's less tightly bound, and here there's little or no molecular gas. Such atomic gas captured by another galaxy may simply not have had time to cool into the molecular phase, whereas in a merger I would expect there to be some molecular gas throughout the process. 

Tidal encounters could have a couple of other roles, one direct, one indirect. The direct influence is that they might be so disruptive that they keep the gas density low, meaning its cooling rate and hence molecular content remains low (the physics of this would be complicated to explore quantitatively but it works well as a hand-waving explanation). The indirect effect is that gas at a galaxy's edge should be of lower metallicity : that is, purer and less polluted by the products of star formation. The thing is, we don't detect H2 directly but use CO as a tracer molecule. Which means that if the gas has arrived from the outskirts of a galaxy, it may be CO-dark. There could be some molecular gas present, it's just that we can't see it. Of course, to understand which if any of these mechanisms are responsible is a classic (and well justified) case of "more research is needed".

Tuesday, 23 July 2024

EAS 2024 : The Other Highlights

What's this ? A second post on the highlights of the EAS conference ? Yes ! This year I've been unusually diligent in actually watching the online talks I didn't get to see in person. Thankfully these are available for three months after the conference, long enough to actually manage to watch them but also, crucially, short enough to provide an incentive to bother. And I remembered a couple of interesting things from the plenaries that I didn't mention last time but which may be of interest to a wider audience.


Aliens ? There are hardly any talks which dare mention the A-word at astronomical conferences, but one of the plenaries on interstellar asteroids dared to go there. The famous interstellar visitor with the unpronounceable name of ʻOumuamua (which is nearly as bad as that Icelandic volcano that shut down European airspace a few years ago) got a lot of attention because Avi Loeb insists it must be an alien probe. He's wrong, and his claims to have found bits of it under the ocean have been utterly discredited. Still, our first-recorded visitor on a hyperbolic trajectory did do some interesting things. After accounting for the known gravitational forces, its rotation varies in a way that's inconsistent with gravity at the 10-sigma level. The speaker said that the only other asteroids and comets known to do this have experienced obvious collisions or have obvious signs of outgassing, neither of which happened here. He took the "alien" idea quite seriously.

Ho hum. No comment.


Time-travelling explosions. The prize lecture was by PhD prize winner Lorenzo Gavassino, who figured out that our equations for hydrodynamics break down at relativistic velocities. Normally I would find this stuff incomprehensible but he really was a very good speaker indeed. And the main result is that they break down in a spectacular way. You might be familiar with simultaneity breaking, where events look different to observers at different speeds. Well, says Lorenzo, this happens to fluids moving at relativistic speeds in dramatic fashion : one observer should see a small amount of heat propagating at faster than the speed of light while another would see some energy travelling backwards through time. The result should be massive (actually, infinite) instabilities and the spontaneous formation of singularities. Accretion discs in simulations ought by rights to explode, and God knows what should happen to neutron stars.

The reason that this doesn't happen appears to be a numerical artifact which effectively smooths over this (admittedly small) amount of leakage. But what we need to do to make the equations rigorously correct, and how that would affect our understanding of these systems, isn't yet known.


Ultra Diffuse Galaxies may be tidal dwarfs in disguise. Another really interesting PhD talk was on how UDGs might form in clusters. When galaxies interact in low density environments, they can tear off enough gas and stars to form so-called tidal dwarfs. The key features of these mini-galaxies are that they don't have any dark matter (which is too diffuse to be captured in an interaction like this) and that they're short-lived, usually re-merging with one of their parents in, let's say, 1 Gyr or so. But what if the interaction happens near the edge of a cluster ? Well, then the group can disperse and its members become separated as they fall in, so the TDG won't merge with anything. Ram pressure will initially increase its star formation, increasing the stellar content in its centre and making it more compact, before eventually quenching it due to simple lack of gas. So there should be a detectable trend in these galaxies, from more compact to more diffuse going outwards from the cluster centre, all lacking in dark matter.

Of course this doesn't explain UDGs in isolated environments, but there's every reason to think that UDGs might be formed by multiple different mechanisms. A bigger concern was that the simulations didn't seem to include the other galaxies in the cluster, so the potentially very destructive effects of tidal encounters weren't accounted for. But survivorship bias was very much acknowledged : all galaxies, she said, get more compact closer to the centre, but not all of them survive. It's a really intriguing idea and definitely one to watch.


Even more about UDGs ! These were a really hot topic this year and whoever decided to schedule the session in one of the smaller rooms was very foolish, because it was overflowing. A few hardy souls stood at the back, but most gave up due to the poor air conditioning. Anyway, a couple of extra points. You might remember that I wasn't impressed by early claims that NGC 1052-DF4, one of the archetypes of galaxies without dark matter, had tidal tails. Well, I was wrong about that. New, deeper data clearly shows that it does have extended features beyond its main stellar disc. Whether that really indicates tidal disruption... well, I'll read the paper on that. And its neighbour DF2 remains stubbornly tail-less.

The other point is a new method for measuring distances to UDGs by looking at the stellar velocity dispersion of their globular clusters. This was the work of a PhD student who found that there's a relationship between this dispersion and the absolute brightness of the parent galaxy. Getting the dispersion of the clusters is still challenging, requiring something like 20 hours on the VLT... but this is a far cry from the 100 HST orbits needed for the dispersion of the main stellar component of the galaxy itself. Apparently this works on the dispersion within individual clusters, so even one would be enough. They tested this on DF2 and DF4 and found a distance of... 16 Mpc, right bang in the middle of the 13 and 20 Mpc claims that have been plagued with so much controversy.

Ho hum. No comment.


Fountains of youth and death. Some galaxies which today are red and dead appear to have halted their star formation very early on, but why ? One answer presented here quite decisively was due to AGN – i.e. material expelled from the enormous energies of a supermassive black hole in the centre of the galaxy. Rather unexpectedly it seems that most of this gas is neutral with only a small fraction being ionised, and detections of these neutral outflows are now common. In fact this may even be the main mechanism for quenching at so-called "cosmic noon" (redshifts of 1-2) when star formation peaked. Well, we'll see.

The other big talking point about fountains of ejected material was how galaxies replenish their gas. Here I learned two things I wish someone had told me years ago because they're very basic and I should probably have known them anyway. First, by comparing star formation rates with the mass of gas, one can estimate the gas depletion time, which is just a crude measure of how long the gas should last. And at low redshift this is suspiciously low, about a billion years. Does this mean we're in the final stages of star formation ? This is still about 10% or so of the lifetime of the Universe so it's never seemed all that suspicious to me.
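As a back-of-the-envelope sketch of what that estimate involves (the numbers below are purely illustrative, roughly Milky-Way-ish, and not from the talk) :

```python
# Gas depletion time : t_dep = M_gas / SFR, i.e. how long the current gas
# supply would last at the current star formation rate.
M_gas = 2e9    # cold gas mass in solar masses (assumed, illustrative)
SFR   = 2.0    # star formation rate in solar masses per year (assumed)

t_dep = M_gas / SFR                                 # years
print(f"Depletion time ~ {t_dep / 1e9:.0f} Gyr")    # ~1 Gyr
```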

The problem is that this depletion time has remained low at all redshifts. It's not that galaxies are suspiciously close to the end, it's that they should have already stopped forming stars and run out of gas long ago. Star formation can be estimated in different ways with no real constraint on distance, though gas content is a bit harder – we can't do neutral hydrogen in the distant Universe, but we can absolutely do molecular and ionised gas. Despite the many caveats of detail there's a very strong consensus that galaxies simply must be refuelling from somewhere.

One of those models has been the so-called galactic fountain. Galaxies expel gas due to stellar winds and supernovae, some of which escapes but most of which falls back to the disc. Now, it's obvious how this explains why star formation keeps going in individual, local parts of the disc where the depletion time is too short, but how it explains the galaxy overall has never been clear to me. What might be going on is that the cold clouds of ejected gas (which look like writhing tendrils in the simulations) act as condensation sites as they move through the hot corona and fall back. Here gas in the hot, low density corona of the galaxy can cool, with the simulations saying that this mass of gas can be very significant. So the galaxy tops up its fuel tank from its own wider reservoir. It will of course eventually run out completely, but not anytime soon.

This is a compelling idea but there are two major difficulties, one theoretical and one observational. The theoretical problem is that the details of the simulations really matter, especially resolution. If this is too low, clouds might appear to last much longer than they do in reality. One speaker presented simulations showing that this mechanism worked very well indeed, while another showed that actually the clouds should tend to evaporate before they ever make it back to the disc, so this wouldn't be a viable mechanism at all. On the other hand, neither used a realistic corona : if it's actually not the smooth and homogeneous structure they assume it to be, this could totally change the results.

The observational difficulty is that these cold gas clouds are just not seen anywhere. This is harder to explain but may depend on the very detailed atomic physics : maybe the clouds are actually warmer and more ionised than the predictions, or maybe colder and molecular. Certainly we know there can be molecular gas which is very hard to detect because it doesn't contain any of the tracer molecules we usually use; H2 is hard to detect directly so we usually use something like CO. 


And with that, I really end my summaries of EAS 2024, and return to regular science.
