Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.

Wednesday, 19 February 2025

Nobody Ram Pressure Strips A Dwarf !

Very attentive readers may remember a paper from 2022 claiming, with considerable and extensive justification, to have detected a new class of galaxian object : the ram pressure dwarf. These are similar to the much better-known tidal dwarf galaxies, which form when gravitational encounters remove so much gas from galaxies that the stripped material condenses into a brand new object. Ram pressure dwarfs would be essentially similar, but result from ram pressure stripping instead of tidal encounters. A small but increasing number of objects in Virgo seem to fit the bill quite nicely, as they don't match the scaling relations for normal galaxies at all well.

This makes today's paper, from 2024, a little late to the party. Here the authors are also claiming to have discovered a new class of object, which they call a, err... ram pressure dwarf. From simulations.

I can't very well report this one without putting my sarcastic hat on. So you discovered the same type of object but two years later and only in a simulation eh ? I see. And you didn't cite the earlier papers either ? Oh.

And I also have to point out an extremely blatant "note to self" that clearly got left in accidentally. On the very first page :

Among the ∼60 ram-pressure-stripped galaxies belonging to this sample, ionized gas powered by star formation has been detected (R: you can get ionized gas that is not a result of star formation as well, so maybe you could say how they have provided detailed information about the properties of the ionized gas, its dynamics, and star formation in the tails instead) in the tentacles.

No, that's not even the preprint. That's the full, final, published journal article !

Okay, that one made me giggle, and I sympathise. Actually I once couldn't be bothered to finish looking up the details of a reference so I put down "whatever the page numbers are" as a placeholder... but the typesetter fortunately picked up on this ! 

What does somewhat concern me at a (slightly) more serious level, though, is that this got through the publication process. Did the referee not notice this ? I seem to get picked up on routinely for the most minor points which frankly amount to no more than petty bitching, so it does feel a bit unfair when others aren't apparently having to endure the same level of scrutiny.

Right, sarcastic hat off. In a way, that this paper is a) late and b) only using simulations is advantageous. It seems that objects detected initially in observational data have been verified by theoretical studies fully independently of the original discoveries. That gives stronger confirmation that ram pressure dwarfs are indeed really a thing.

Mind you, I think everyone has long suspected in the back of their minds that ram pressure dwarfs could form. After all, why not ? If you remove enough gas, it stands to reason that sometimes part of it could become gravitationally self-bound. But it's only recently that we've had actual evidence that they exist, so having theoretical confirmation that they can form is important. That puts the interpretation of the observational data on much stronger footing.

Anyway, what the authors do here is to search one of the large, all-singing, all-dancing simulations for candidates where this would be likely. They begin by looking for so-called jellyfish galaxies, in which ram pressure is particularly strong so that the stripped gas forms distinct "tentacle" structures. They whittle down their sample to ensure the galaxies have had no recent interactions with other galaxies, so that the gas loss should be purely due to ram pressure and not tidal encounters. Within the three galaxies which meet these criteria, they look for stellar and gaseous overdensities, finding one good ram pressure dwarf candidate, which they present here.

By no means does this mean that such objects are rare. Their criteria for sample selection are deliberately strict so they can be extremely confident of what they've found. Quite likely there are many other candidates lurking in the data which they didn't find only because they had recent encounters with other galaxies, which would mean they weren't "purely" resulting from ram pressure. I use the quotes because determining which factor was mainly responsible for the gas loss can be extremely tricky. And simulation resolution limits mean there could be plenty of smaller candidates in there. The bottom line is that they've got only one candidate because they demand the quality of that candidate be truly outstanding, not because they're so rare as to be totally insignificant.

And that candidate does appear to be really excellent and irrefutable. It's a clear condensation of stars and gas at the end of the tentacle that survives for about a gigayear, with no sign of any tidal encounters being responsible for the gas stripping. It's got a total stellar mass of about ten million solar masses, about ten times as much gas, and no dark matter – the gas and stars are bound together by their own gravity alone. The only weird thing about it is the metallicity, which is extraordinarily large, but this appears to be an artifact of the simulations and doesn't indicate any fundamental problem.

In terms of the observational candidates, this one is similar in size but at least a hundred times more massive. Objects that small would, unfortunately, be simply unresolvable in the simulations because they wouldn't contain nearly enough particles. But this is consistent with this object being just the tip of a much more numerous iceberg of similar but smaller features. Dedicated higher resolution simulations might be able to make better comparisons with the observations, until someone finds a massive ram pressure dwarf in observational data.

I don't especially like this paper. It contains the phrase "it is important to note" no less than four times, it says "as mentioned previously" in relation to things never before mentioned, it describes the wrong panels in the figures, and it has many one-sentence "paragraphs" that make it feel like a BBC News article if the writer was unusually technically competent. But all of these quibbles are absolutely irrelevant to the science presented, which so far as I can tell is perfectly sound. As to the broader question of whether ram pressure dwarfs form a significant component of the galaxy population, and indeed how they manage to survive without dark matter in the hostile environment of a cluster... that will have to await further studies.

How To Starve Your Hedgehog

Today, two papers on hedgehogs and quenched galaxies. It'll make more sense later on, but only slightly.

"Quenched" is just a bit of jargon meaning that galaxies have stopped forming stars, if not completely, then at least well below their usual level. There are a whole bunch of ways this can happen, but they all mostly relate to environment. Basically you need some mechanism to get the gas out of galaxies where it then disperses. In clusters this is especially easy because of ram pressure stripping, where the hot gas of the cluster itself can push out gas within galaxies. In smaller groups the main method would be tidal interactions, though this isn't as effective.
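The cluster case can be made quantitative. The classic condition for ram pressure stripping is the Gunn & Gott (1972) criterion : gas is stripped wherever the ram pressure, ρ_ICM v², exceeds the gravitational restoring force per unit area, roughly 2πG Σ_star Σ_gas. Here's a minimal sketch of that comparison ; all the input numbers are purely illustrative, not taken from either paper :

```python
import math

# Minimal sketch of the Gunn & Gott (1972) ram pressure stripping
# criterion: gas is stripped where the ram pressure rho_icm * v^2
# exceeds the gravitational restoring force per unit area,
# ~ 2 * pi * G * Sigma_star * Sigma_gas. All values are SI units
# and the example numbers below are rough, illustrative assumptions.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def is_stripped(rho_icm, v, sigma_star, sigma_gas):
    """True if ram pressure exceeds the restoring force per unit area."""
    ram_pressure = rho_icm * v**2                       # Pa
    restoring = 2 * math.pi * G * sigma_star * sigma_gas  # Pa
    return ram_pressure > restoring

# A dwarf moving at ~1000 km/s through a Virgo-like intracluster medium
# (~10^-3 protons per cm^3 -> ~1.7e-24 kg/m^3), with stellar and gas
# surface densities of ~5 solar masses per square parsec (~1e-2 kg/m^2):
print(is_stripped(1.67e-24, 1e6, 1.04e-2, 1.04e-2))  # True: easily stripped

# The same dwarf in the field: far lower gas density and velocity.
print(is_stripped(1e-28, 1e5, 1.04e-2, 1.04e-2))     # False: gas survives
```

The two example calls show exactly the asymmetry described above : the cluster case exceeds the restoring force by orders of magnitude, while typical field conditions fall well short of it.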

What about in isolation ? There things get tricky. Even the general field is not a totally empty environment : there are other galaxies present (just not very many) and external gas (just of very low density). But you also have to start to consider what might have happened to galaxies there over the whole of time, because conditions were radically different in the distant past.

To cut a long story short, what we find is that giant galaxies seem to have formed the bulk of their stars way back in a more exciting era when things were just getting started. Dwarf galaxies in the field, on the other hand, are still forming stars, and in fact their star formation rate has been more or less permanently constant.

This phenomenon is called downsizing, and for a long time it had everyone sorely puzzled : naively, giant galaxies ought to assemble more slowly, so were presumed to have taken longer to assemble their stellar population, whereas dwarfs should form more quickly. Simplifying, this was due to a host of problems in the details of the physics of the models, and as far as I know it's generally all sorted out now. Small amounts of gas can, in fact, quite happily maintain a lower density for longer, hence dwarfs form stars more slowly but much more persistently.

Dwarfs are, of course, much more susceptible to environmental gas removal processes than giants, and indeed dwarfs in clusters are mostly devoid of gas (except for recent arrivals). Conversely, any dwarfs which have lost their gas in the field are unexpected, because there's nothing very much going on out there : all galaxies of about the same mass should have about the same level of star formation. There's no reason that some of them should have lost their gas while others held on to it - it should be an all-or-nothing affair.

That's why isolated quenched galaxies are interesting, then. On to the new results !


The first paper concentrates on a single example which they christen "Hedgehog", because "hedgehogs are small and solitary animals" and also presumably because "dw1322m2053" is boring, and cutesy acronyms are old hat. Wise people, I approve.

This particular hedgehog galaxy is quite nearby (2.4 Mpc) and extremely isolated, at least 1.7 Mpc from any nearby massive galaxies. That puts it at least four times further away than expected from the region of influence of any groups, based on their masses. It's a classic quenched galaxy, "red and dead", smooth and structureless, with no detectable star formation at all.

It's also very, very small. They estimate the stellar mass at around 100,000 solar masses, whereas for more typical dwarf galaxies you could add at least two or three zeros on to that. Now that does mean they can't say definitively if its lack of star formation is a really significant outlier, simply because for an object this small, you wouldn't expect much anyway. But in every respect it appears consistent with being a tiny quenched galaxy, so the chance that it has any significant level of star formation is remote.

How could this happen ? There are a few possibilities. While it's much further away from the massive groups than where you'd normally expect to see any effect from them, simulations have shown that it's just possible to get quenched galaxies this far out. But this is extraordinarily unlikely, given that they found this object serendipitously. They also expect these so-called "backsplash" galaxies (objects which have passed through a cluster and out the other side*) to be considerably larger than this one, because they would have formed stars for a prolonged time, right up until the point they fell into the cluster.

* I presume and hope this is a men's urinal reference.

Another option is simply that the star formation in small galaxies might be self-limiting, with stellar winds and supernovae able to eject the gas. This, they say, is only expected to be temporary (since most of the gas should fall back in after a while), so again the chances of finding something like this are pretty slim. But I'd have liked more details about this, since I would expect that for galaxies this small - and it really is tremendously small - the effects of feedback could be stronger than for more typical, more massive galaxies. Maybe stellar winds and explosions could permanently eject much more of the gas, although on the other hand galaxies this small would have fewer massive stars capable of this.

Similarly, another possibility, which I don't think they mention, is quenching due to ram pressure in the field. Again, for normal dwarf galaxies, this is hardly a promising option. For ram pressure to work effectively, you need gas of reasonably high density and galaxies moving at significant speeds, neither of which happens in the field. But studies have shown that galaxies in the field do experience (very) modest amounts of gas loss which correlates with their distance from the large-scale filaments. Ordinarily this is not really anything substantial, but for galaxies this small, it might be. Since a galaxy this small just won't have much gas to begin with, and removing it will be easy because it's such a lightweight, what would normally count as negligible gas loss might be fatal for a tiddler like this.

The most interesting option is reionisation. When the very first stars were formed, theory says, there were hardly any elements around except hydrogen and helium and a smattering of others. Heavier elements allow the gas to cool and therefore condense more efficiently, so today's stars are comparative minnows. But with none of this cooling possible, the earliest stars were monsters, perhaps thousands of times more massive than the Sun. They were so powerful that they reionised the gas throughout the Universe, heating it so that cooling was strongly suppressed, at least for a while. In more massive galaxies gravity eventually overcame this, but in the smallest galaxies it could be halted forever.

Hedgehog, the authors say, is right on the limit where quenching by reionisation is expected to be effective. If so then it's a probe of conditions in the very early universe, one which is extremely important as it's been used a lot to explain why we don't detect nearly as many dwarf galaxies as theory otherwise predicts*. The appealing thing about this explanation is the small size and mass of the object, which isn't predicted by other mechanisms.

* They do mention that the quenched fraction of galaxies in simulations rises considerably at lower masses, but how much of this is due to reionisation is unclear.

This galaxy isn't quite a singular example, but objects like this one are extremely rare. Of course ideally we'd need a larger sample, which is where the second paper comes in.


This one is a much more deliberate attempt to study quenched galaxies, though not necessarily isolated. What they're interested in is our old friends, Ultra Diffuse Galaxies, those surprisingly large but faint fluffy objects that often lack dark matter. In this paper the authors used optical spectroscopy to target a sample of 44 UDGs, not to measure their dynamics (the spectroscopic measurements are too imprecise for that) but to get their chemical composition. With this they can identify galaxies in a post-starburst phase, essentially just after star formation has stopped. That kind of sample should be ideal for identifying where and when galaxies get quenched.

I'm going to gloss over a lot of careful work they do to ensure their sample is useful and their measurements accurate. The sample size is necessarily small because UDGs are faint, and their own data finds that some of the distance estimates were wrong so a few candidates weren't actually UDGs after all. Their final result of 6 post-starburst UDGs doesn't sound much, and indeed it isn't, but these kinds of studies are still in their very early days and you have to start somewhere.

Even with the small sample size, they find two interesting results. First, the fraction of quenched UDGs is around 20%, much higher than in the general field population. The stellar masses are a lot higher than Hedgehog's, though still small compared to most dwarfs, so this result needs to be treated with a bit of caution ; it's definitely interesting all the same. Second, while most quenched UDGs do appear to result from environmental effects, a few are indeed isolated. Which is a bit weird and unexpected. UDGs in clusters might form by gas loss from more "typical" galaxies, but this clearly can't work in the field, so why only a select few should lose gas isn't clear at all.


What all this points to isn't all that surprising, though in a somewhat perverse sense : it underscores that we don't fully understand the physics of star formation. The authors of the second study favour stellar feedback as being responsible for a temporary suppression of star formation. If this is common and repeated, with galaxies experiencing many periods of star formation interspersed with lulls, that could also make Hedgehog a bit less weird - if, say, it's forming/not forming stars for roughly the same total amount of time, then it wouldn't be so strange to detect it during a quenched phase. And of course the lower dark matter content of UDGs surely also has some role to play in this, although what that might be is anyone's guess.

As usual, more research is needed. At this point we just need more data, both observational and simulations. That we're still finding strange objects that're hard to explain isn't something to get pessimistic about though. We've learned a lot, but we're still figuring out just how much further we have to go before we really understand these objects.

Monday, 17 February 2025

Sports Stars Can Save Humanity

I know, I know, I get far less than my proverbial five-a-day so far as reading papers goes. Let me try and make some small amends.

Today, a brief overview of a couple of visualisation papers I read while I was finishing off my own on FRELLED, plus a third which is somewhat tangentially related.


The first is a really comprehensive review of the state of astronomical visualisation tools in 2021. Okay, they say it isn't comprehensive, which is strictly speaking true, but that would be an outright impossible task. In terms of things at a product-level state, with useable interfaces, few bugs and plenty of documentation, this is probably as close as anyone can realistically get.

Why is a review needed ? Mainly because with the "digital tsunami" of data flooding our way, we need to know which tools already exist before we go about reinventing the wheel. As they say, there are data-rich but technique-poor astronomers and data-poor but technique-rich visualisation experts, so giving these groups a common frame of reference is a big help. And as they say, "science not communicated is science not done". The same is true for science ignored as well, of which I'm extremely guilty... you can see from the appallingly-low frequency of posts here how little time I manage to find for reading papers. 

So yeah, having everything all together in one place makes things very much easier. They suggest a dedicated keyword for papers, "astrovis", to make everything easier to find. As far as I know this hasn't been adopted anywhere, but it's a good idea all the same.

Most of the paper is given to summarising the capabilities of assorted pieces of software, some of which I still need to check out properly (and yes, they include mine, so big brownie points to them for that !). But they've also thought very carefully about how to organise all this into a coherent whole. For them there are five basic categories for their selected tools : data wrangling (turning data into something suitable for general visualisation), exploration, feature identification, object reconstruction, and outreach. They also cover the lower-level capabilities (e.g. graph plotting, uncertainty visualisation, 2D/3D, interactivity) without getting bogged-down in unproductively pigeon-holing everything. 

Perhaps the best bit of pigeon-unholing is something they quote from another paper : the concept of explornation, an ugly but useful word meaning the combination of exploration and explanation. This, I think, has value. It's possible to do both independently, to go out looking at stuff without ever gaining any understanding of it at all, or conversely to try and interpret raw numerical data without ever actually looking at it. But how much more powerful is the combination ! Seeing can indeed be believing. The need for good visualisation tools is not only about making pretty pictures (although that is a perfectly worthwhile end in itself) but also about helping us understand and interpret data in different ways, every bit as much as developing new techniques for raw quantification.

I also like the way they arrange things here, because we too often tend to ignore tools developed for purposes other than our own field of interest. And they're extraordinarily non-judgemental, both about individual tools and different techniques. From personal experience it's often difficult to remain so aloof, to avoid saying, "and we should all do it this way because it's just better". Occasionally this is true, but usually what's good for one person or research topic just isn't useful at all for others.

On the "person" front I also have to mention that people really do have radically different preferences for what they want out of their software. Some, inexplicably, genuinely want everything to be done via text and code and nothing else, with only the end result being shown graphically. Far more, I suspect, don't like this. We want to do everything interactively, only using code when we need to do something unusual that has to be carefully customised. And for a long time astronomy tools have been dominated too much by the interface-free variety. The more that's done to invert the situation, the better, so far as I'm concerned.


The second paper presents a very unusual overlap between the world of astronomy and... professional athletes. I must admit this one languished in my reading list for quite a while because I didn't really understand what it was about from a quick glance at the abstract or text, mostly because of my own preconceptions : I was expecting it to be about evaluating the relative performance of different people at source-finding. Actually this is (almost) only tangential to the main thrust of the paper, though it's my own fault for misreading what they wrote.

Anyway, professional sports people train themselves and others by reviewing their behaviour using dedicated software tools. One of the relatively simple features that one of these (imaginatively named "SPORTSCODE") has is the ability to annotate videos. This means that those in training can go back over past events and see relevant features, e.g. an expert can point out exactly what and where something of interest happened – and thereby, one hopes, improve their own performance.

What the authors investigate is whether astronomers can use this same technique, even using the same code, to accomplish the same thing. If an expert marks on the position of a faint source in a data cube, can a non-expert go back and gain insight into how they made that identification ? Or indeed if they mark something they think is spurious, will that help train new observers ? The need for this, they say, is that ever-larger data volumes threaten to make training more difficult, so having some clear strategy for how to proceed would be nice. They also note that medical data, where the stakes are much, much higher, relies on visual extraction, while astronomical algorithms have traditionally been... not great. "Running different source finders on the same data set rarely generates the same set of candidates... at present, humans have pattern recognition and feature identification skills that exceed those of any automated approach."
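As a toy illustration of that last point, quantifying how much two source finders agree usually means cross-matching their candidate lists by position within some tolerance. A minimal sketch follows ; the catalogues and the matching radius are invented for illustration, and real HI cross-matching would also compare velocities :

```python
# Toy sketch: cross-match two source-finder candidate lists by position.
# Each catalogue is a list of (x, y) positions (arbitrary units); a pair
# counts as the same source if it lies within `radius`. Greedy nearest
# matching, with each candidate in cat_b used at most once.

def cross_match(cat_a, cat_b, radius):
    """Return index pairs (i, j) of matched sources between the catalogues."""
    matches = []
    used = set()
    for i, (xa, ya) in enumerate(cat_a):
        best, best_d2 = None, radius**2
        for j, (xb, yb) in enumerate(cat_b):
            if j in used:
                continue
            d2 = (xa - xb)**2 + (ya - yb)**2
            if d2 <= best_d2:
                best, best_d2 = j, d2
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches

# Two invented candidate lists from two hypothetical finders:
finder1 = [(10.0, 12.0), (45.2, 33.1), (78.9, 5.5)]
finder2 = [(10.3, 11.8), (60.0, 60.0)]
print(cross_match(finder1, finder2, radius=1.0))  # -> [(0, 0)]
```

Here only one of the three candidates from the first finder has a counterpart in the second, exactly the sort of disagreement the quote describes.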

Indeed. This is a sentiment I fully endorse, and I would advocate using as much visual extraction as possible. Nevertheless, my own tests have found that more modern software can approach visual performance in some limited cases, but a full write-up on that is awaiting the referee's verdict.

While this paper asks all the right questions, it presents only limited answers. I agree that it's an interesting question as to whether source finding is a largely inherent or learned (teachable) skill, but most of the paper is about the modifications they made to SPORTSCODE and its setup to make this useful. The actual result is a bit obvious : yes indeed, annotating features is useful for training, and subjectively this feels like a helpful thing to do. I mean... well yeah, but why would you expect it to be otherwise ? 

I was hoping for some actual quantification of how users perform before and after training – to my knowledge nobody has ever done this for astronomy. We muddle through training users as best we can, but we don't quantify which technique works best. That I would have found a lot more interesting. As it is, it's an interesting proof of concept, and it asks all the right questions, but the potential follow-up is obvious and likely much more interesting and productive. I also have to point out that FRELLED comes with all the tools they use for their training methods, without having to hack any professional athletes (or their code) to get them to impart their pedagogical secrets.


The final paper ties back into the question of whether humans can really outperform algorithms. I suppose I should note that these algorithms are indeed truly algorithms in the traditional, linear, procedural sense, and nothing at all to do with LLMs and the like (which are simply no good at source finding). What they try to do here is use the popular SoFiA extractor in combination with a convolutional neural network. SoFiA is a traditional algorithm, which for bright sources can give extremely reliable and complete catalogues, but it doesn't do so well for fainter sources. So to go deeper, the usual approach is to use a human to vet its initial catalogues to reject all the likely-spurious identifications.

The authors don't try to replace SoFiA with a neural network. Instead they use the network to replace this human vetting stage. Don't ask me how neural networks work but apparently they do. I have to say that while I think this is a clever and worthwhile idea, the paper itself leaves me with several key questions. Their definition of signal to noise appears contradictory, making it hard to know exactly how well they've done : it isn't clear to me if they really used the integrated S/N (as they claim) or the peak S/N (as per their definition). The two numbers mean very different things. It doesn't help that the text is replete with superlatives, which did annoy me quite a bit.
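For what it's worth, the distinction matters because the two statistics answer different questions : peak S/N tells you about the brightest channel, integrated S/N about the whole source. A quick sketch, using generic definitions (which are not necessarily the paper's) :

```python
import math

# Sketch of why peak and integrated S/N differ. Generic definitions:
# peak S/N is the brightest channel over the rms noise; integrated S/N
# is the summed flux over the noise on the sum, which for N independent
# channels grows as sqrt(N).

def peak_snr(flux, rms):
    return max(flux) / rms

def integrated_snr(flux, rms):
    return sum(flux) / (rms * math.sqrt(len(flux)))

# A broad, faint source: every channel is buried in the noise, but the
# total flux adds up coherently while the noise only grows as sqrt(N).
flux = [0.5] * 100   # 100 channels, each at half the rms (arbitrary units)
rms = 1.0
print(peak_snr(flux, rms))        # 0.5 -- invisible channel by channel
print(integrated_snr(flux, rms))  # 5.0 -- clearly detected when summed
```

A source can thus be a solid 5-sigma detection in integrated S/N while never rising above half the noise in any single channel, which is why quoting one number when you defined the other makes the results hard to judge.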

The end result is clear enough though, at least at a qualitative level : this method definitely helps, but not as much as visual inspection. It's interesting to me that they say this can fundamentally only approach but not surpass humans. I would expect that a neural network could be trained on data containing (artificial) sources so faint a human wouldn't spot them, but knowing they were there, the program could be told when it found them and thereby learn their key features. If this isn't the case, then it's possible we've already hit a fundamental limit, that when humans start to dig into the noise, they're doing about as well as it's ever possible to do by any method. When you get to the faintest features we can find, there simply aren't any clear traits that distinguish signal from noise. Actually improving on human vision in any significant way might be a matter of a radically different approach... but it might even be an altogether hopeless challenge.

And that's nice, isn't it ? Cometh the robot uprising, we shall make ourselves useful by doing astronomical source-finding under the gentle tutelage of elite footballers. 

Or not, because the fact that algorithms can be thousands of times faster can more than offset their lower reliability levels, but that's another story.

Phew ! Three papers down, several hundred more to go.

The Most Interesting Galaxies Are SMUDGES

Ultra Diffuse Galaxies remain a very hot topic in astronomy. You know the drill by now : great big fluffy things with hardly any stars and s...