Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.

Thursday, 3 July 2025

The Bunny Rabbit of Death

Today's paper is a bit more technical than usual, but sometimes you've gotta tackle the hard stuff.

Ram pressure stripping is something we seem to understand pretty well on a large scale. When galaxies enter a massive cluster containing its own gas, pressure builds up that can push out the gas in the galaxy. If it's going fast enough, and/or the cluster gas is dense enough, then the galaxy can lose all of its gas pretty quickly. No ifs or buts, it just loses all its gas, stops forming stars, realises it's made incredibly poor life choices, and dies.

Yeah, literally, it dies. It's run out of fuel for star formation, which means all its remaining massive blue stars aren't replaced when they explode as supernovae in a few million years. Slowly it turns into a "red and dead" smooth, structureless, boring disc, and maybe eventually an elliptical. There's a wealth of evidence that ram pressure is the dominant mechanism of gas loss within clusters, and everything seems to just basically... work. Which is nice.
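
For what it's worth, the "well understood" large-scale physics really does fit in a few lines. The classic Gunn & Gott (1972) criterion just compares the ram pressure to the disc's gravitational restoring force per unit area. The sketch below uses invented but representative numbers, not values for any particular galaxy :

```python
import math

# A minimal sketch of the Gunn & Gott (1972) stripping condition : gas is
# lost where ram pressure (rho_icm * v^2) beats the disc's gravitational
# restoring force per unit area (~ 2 pi G Sigma_star * Sigma_gas).
# All values are illustrative order-of-magnitude guesses, not measurements.
G     = 6.674e-11     # m^3 kg^-1 s^-2
M_SUN = 1.989e30      # kg
PC    = 3.086e16      # m

rho_icm = 1e-24       # kg m^-3, roughly a cluster-core gas density
v_gal   = 1500e3      # m s^-1, a fast orbit through the cluster

def stripped(sigma_star_msun_pc2, sigma_gas_msun_pc2):
    """True where ram pressure exceeds the local restoring pressure."""
    s_star = sigma_star_msun_pc2 * M_SUN / PC**2
    s_gas  = sigma_gas_msun_pc2  * M_SUN / PC**2
    return rho_icm * v_gal**2 > 2 * math.pi * G * s_star * s_gas

print("inner disc stripped :", stripped(200, 20))  # dense centre holds on
print("outer disc stripped :", stripped(10, 5))    # diffuse edge loses gas
```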

But, as ever, the details are where it gets interesting. In the extreme case, what you'll see is a galaxy with a big long tail of gas, one single plume stretching off until it's torn apart and dissolved in the chaos of the cluster. 

Even here things can be complicated though. Some tails seem to have multiple components : extremely hot X-ray emitting gas, cooler neutral atomic hydrogen detectable with radio telescopes, intermediate-temperature ionised gas that emits over very narrow "Hα" optical wavelengths, and very cold gas indeed that emits in the sub-mm regime. They may or may not have stars forming within the plume, and all of these different components can have radically different structures. Or they might all line up quite neatly. Sometimes all of these phases are present, sometimes just one or two.

And then, if a galaxy isn't in the extreme case, it can be even more complicated. If the ram pressure isn't enough to accelerate the gas to escape velocity, it can still be pushed out only to fall back in somewhere else in the disc. In short, it gets messy.

This paper attempts to understand one of those messy cases. It's part of the ALMA JELLY program, a large ALMA observing program run by my officemate Pavel Jachym (conflict of interest : declared ! BOX TICKED). Here they introduce the first analysis of one of their 28 target galaxies and tackle the important question (though they would never dare state it thus) : 

Why does it look like the Playboy bunny rabbit ?

Wait, wait... why is it called ALMA JELLY ? It's not an acronym as far as I know. Instead, "jellyfish" galaxies have become a popular name for galaxies experiencing ram pressure stripping as some of them have distinct, narrow tails that look very much like the tentacles of a jellyfish. The term has become somewhat abused lately, often used for any ram-pressure stripping galaxy regardless of what its tail looks like. Here they attempt to take back control of the term and define it as galaxies which have stars forming in their stripped material. This often occurs in narrow tendrils so it's a pretty good proxy for jellyfish-like structures, and highlights the unusual physics at work in these cases.

And, why ALMA ? ALMA observes the cold molecular gas, which is generally agreed to be the main fuel for star formation. The target here already has many observations at other wavelengths, but the molecular gas has been traditionally tough to observe. Now they can fill in the gap, and with extreme resolution too.

So, the bunny rabbit. The first target for ALMA JELLY is NGC 4858. It's certainly a prime example of a jellyfish galaxy, with clear, bright tendrils of stars extending in one direction directly away from the centre of the Coma cluster in which it resides. It's also close to the cluster centre, where ram pressure ought to be very strong. It's got observations at a bunch of different wavelengths and it is, in short, a right proper mess. Really, it's the kind of thing I might be minded to throw up my hands and say, "hahahah no, I'm not touching that with a barge pole". Or, failing that, I might wave my hands furiously and say, "something something HYDRODYNAMICS !".

Hydrodynamic effects, the complicated interactions between two or more different fluids, are an easy get-out. Mixing of fluids causes extremely complex structures, so if something's a mess, it's a safe bet that hydrodynamics can explain it. Though, in that case you ought to run simulations to test if that really works or not.

Here they don't. Instead they try the much braver task of explaining it without any dedicated simulations, and even the simulations they do use for reference don't have full hydrodynamic effects – just some very basic approximations of the major forces at work from the external gas. And yet they seem to have come up with a pretty convincing explanation.

It works like this. First, NGC 4858 is a grand design spiral, with two prominent spiral arms. As it rotates, each arm moves through a region where it's subjected to varying ram pressure forces, which are greatest on the side rotating away from the cluster centre (where the gas is moving fastest away from the cluster, making it easiest to remove). A single, dense arm thus gives rise to a single, dense plume of gas – a tail. But this tail gas preserves some of the rotation it had around the galaxy's centre, so it doesn't just get blasted out into space – it keeps moving around the galaxy. This brings it into the shadow of the galaxy, protecting it from the wind of the cluster. Some of the gas is lucky enough that the greatly reduced ram pressure is now essentially impotent, and it falls back onto the galaxy.

Not all of it though. Some keeps going. If any makes it right around to the other side of the galaxy, it moves back into the zone of death and gets finally stripped away by the cluster gas once and for all. The key is that before it reaches this point, the gas gets compressed as it starts to hit the wind again. In the simulations they use as a reference, the galaxy doesn't have prominent spiral arms and shows a single prominent tail; they surmise that because NGC 4858 has two arms, this could naturally give rise to two tails (or ears).
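
Just to make the geometry concrete, here's a crude toy model (mine, not the paper's) of how the ram pressure felt by the gas varies with position around the rotating disc. The density and velocities are invented but representative, and the real situation is of course fully three-dimensional :

```python
import math

# Toy model : gas on the side rotating away from the cluster centre moves
# fastest relative to the intracluster medium, so it feels the strongest
# ram pressure (P_ram ~ rho * v_rel^2); gas rotating into the galaxy's wake
# feels much less, and can fall back.
rho_icm = 1e-24     # kg m^-3, assumed cluster gas density
v_orbit = 1500e3    # m s^-1, the galaxy's speed through the cluster
v_rot   = 200e3     # m s^-1, disc rotation speed

for phi_deg in (0, 90, 180, 270):   # azimuth around the disc
    phi = math.radians(phi_deg)
    v_rel = v_orbit + v_rot * math.cos(phi)   # rotation along the motion
    p_ram = rho_icm * v_rel**2
    print(f"azimuth {phi_deg:3d} deg : v_rel = {v_rel/1e3:4.0f} km/s, "
          f"P_ram = {p_ram:.2e} Pa")
```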

Their observations also show direct evidence of gas returning to the galaxy. The ALMA observations allow them to make a velocity map of the gas, and there's one big feature which is discontinuous with the rest of the velocity structure. And again, that fits with the basic model of how they expect rotating gas to behave.

I've simplified and shortened this one quite a lot, leaving out any number of interesting details. And there's an awful lot more they could still do with this data. But to me, the first thing I wondered when I first saw the ALMA image was "why is it a bunny rabbit ?". I was expecting this to have a much more complex non-answer, featuring hand-waving and invocations to hydrodynamics galore, possibly involving a chicken sacrifice. As it is, they managed to come up with a decent explanation without any of that, which is no mean feat. Both the bunnies and the chickens can rest easy.

Now all they have to do is convince Playboy to give them a sponsorship deal...

Wednesday, 2 July 2025

The Minuscule Candidate

Following on from those couple of papers on possible dark galaxies, comes... another paper on dark galaxies !

This one is a completely different sort of beast. While identifying optically dark galaxies is normally done by looking for their gas instead of their stars, here they use good old-fashioned optical telescopes instead. Even weirder, having found something which is optically faint but not dark, they then go on to infer its dark matter content without measuring its dynamics at all !

If this all sounds very strange, that's because it is. It's by no means crazy, but it must be said that some of the claims here should be taken with a very large pinch of salt.

Let's go right back to basics. A good working definition of a galaxy is a system of gas and/or stars bound together by dark matter. True, there are some notable exceptions like so-called tidal dwarf galaxies, but it's arguable that we should drop the "galaxy" label for those objects altogether (maybe replace it with "system" or something instead). Clearly they're physically very different from most galaxies, which are heavily mass-dominated by their dark matter.

A dark galaxy, then, is just a dark matter halo with maybe some gas but definitely no stars. Or is it ? For sure, if it really has literally zero stars, then such an object would definitely count as a dark galaxy. But what if it had just one star and billions of solar masses worth of dark matter ? Would it really be worth getting hung up on that point ? Presumably the physics involved in its formation would be basically the same as a truly dark object.

Generally speaking, most people would allow an object to qualify as a dark galaxy even if it had some small mass in stars. At present there's no strict definition, however, and so few candidate objects are known that setting a quantitative limit wouldn't really help. Right now, we don't know nearly enough about the physics of the formation of such objects, and indeed the jury's still out on whether any of them exist at all.

(Some people prefer the term "almost dark", which annoys me intensely. I prefer to call them dim when they have some detectable stars, but it hasn't caught on).

Anyway, you can see how this explains using an optical telescope to search for dark galaxies. But actually, here they go a step further. Rather than looking for the ordinary stellar emission from galaxies, which normally comes from their diffuse discs, they look only for the light emitted by the compact, relatively bright globular clusters. Most galaxies have these dense starballs which orbit around in their halos quite separately from their main stellar disc. What these authors are looking for are cases where they find groups of globular clusters without an accompanying disc : essentially, star clusters orbiting all by themselves in their dark matter halos.

This is an interesting grey area in terms of calling something a dark galaxy, but I'd be inclined to say such objects would qualify. The physics at work in forming dense globular clusters and the diffuse stellar disc is quite different, so at the very least, these would certainly be extremely interesting.

Here they present the imaginatively named "Candidate Dark Galaxy 2". Really ? Yes, really. That's the name they're going with. Bravo, team.

(Actually, snarkcasm aside, this is a wee bit insulting, considering that there have been many candidate dark galaxies over the years, but I'll let that pass).

It turns out they had a previous candidate (you can guess the name) which is even more extreme than this one. CDG-1* consists of four globular clusters in close proximity to each other with no detectable diffuse emission between them at all. I won't attempt to discuss the complicated statistical methods they use to identify globular clusters without parent galaxies; at the words "trans-dimensional Markov chain" my eyes glazed over anyway. I can safely mention a few points though : 1) They don't have spectroscopic measurements of the globular clusters so they can't robustly estimate their distances**; 2) Their initial catalogues of globular cluster candidates are surely incomplete, but 3) Since they do careful inspection of the candidate cluster groups they do find, we can be confident that the associations they identify are real.

* I honestly can't remember if I heard about this at the time or not. I may have missed it or just forgotten about it.

** Spectroscopy gives you velocity, which is a very powerful constraint on (though not quite a direct measure of) distance.
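
To make that footnote concrete, here's the arithmetic, assuming a standard Hubble constant of 70 km/s/Mpc and a recession velocity picked to be roughly Perseus-like :

```python
# Why velocity constrains distance : anything in the smooth Hubble flow has
# recession velocity v ~ H0 * d. Peculiar motions (a few hundred km/s) are
# why it's "not quite" a direct measure. All numbers are illustrative.
H0 = 70.0        # km/s/Mpc, assumed Hubble constant
v_rec = 5250.0   # km/s, an assumed, roughly Perseus-like recession velocity

d = v_rec / H0   # Mpc
print(f"inferred distance ~ {d:.0f} Mpc, +/- a few Mpc from peculiar motions")
```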

CDG-2 initially consisted of three globular clusters, but here, using new data from Hubble and Euclid, they identify a fourth. While they still don't have spectroscopy, the new data confirms that the candidates are all unresolved. That means they cannot possibly be close objects, and in fact their colours and other parameters are consistent with their being in the Perseus galaxy cluster* at 75 Mpc distance. So it seems very unlikely that they're either significantly closer or further away. And while there might be a few free-floating globular clusters in Perseus (ripped off their parent galaxies by tidal encounters and the like), it's not very likely that they'd happen to be so close together.

* This can sometimes get very confusing. A globular cluster is a cluster of stars that orbits around a parent galaxy; a giant galaxy might host, say, several dozen such objects. A galaxy cluster is a whole bunch of galaxies, each with their own population of globular clusters, all swarming around together.

The killer argument that this is highly likely to be an actual galaxy, though, is that here they detect diffuse stellar emission between the globular clusters. The thing just looks like a galaxy, albeit an extremely faint one. The chance of a tidal encounter creating something like this isn't worth considering.

Ahh, but is it a dark galaxy ? That's where things get a lot more speculative. While we can be pretty sure about the distance of the objects and their physical association, only spectroscopic measurements would really give a good handle on the total mass. Measuring how fast things are moving lets you infer how much mass you need to hold them together. Without this, they rely on scaling relations, extrapolating based on the globular clusters to infer a massive amount of dark matter : probably there are a few million solar masses of stars present in total, but it could easily have a hundred billion solar masses of dark matter based on the scaling relations.
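
To show what spectroscopy would actually buy them, here's a minimal sketch of the usual dynamical mass logic, using a dispersion-based estimator of the common M ~ k σ² R / G form (the coefficient k is of order a few in estimators of this type). All numbers are invented purely to show the scale of the lever arm :

```python
# Crude dynamical mass for a pressure-supported system : M ~ k * sigma^2 * R / G.
# The coefficient k and all inputs below are assumptions for illustration.
G = 4.301e-3    # gravitational constant in pc (km/s)^2 / M_sun

def dyn_mass(sigma_kms, r_pc, k=5.0):
    """Very rough total mass in solar masses."""
    return k * sigma_kms**2 * r_pc / G

# A few million solar masses of stars, but a 20 km/s dispersion at 3 kpc
# would already imply over a billion solar masses in total :
print(f"M_dyn ~ {dyn_mass(20.0, 3000.0):.1e} M_sun")
```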

These scaling relations are, however, truly enormous extrapolations. Ultra Diffuse Galaxies are now known which have significantly lower dark matter contents than typical galaxies, yet these too have globular clusters, so I'd be wary about digging any deeper into this one until they get some spectroscopy.

Even so, it's clearly a very interesting object indeed. Arguably even more interesting, however, is CDG-1, which still has no diffuse emission detected at all. Even if the extreme dark matter content turns out to be an overestimate, if either of them has any dark matter at all, they're still super weird objects. Hopefully when they find CDG-3 I won't be caught quite so unawares.

Friday, 6 June 2025

They're Heee-re...

Or are they ?

Today, two papers on my favourite science topic of all : dark galaxies. In the past there have been a multitude of candidate detections but spread out very thinly. You get, I'd guesstimate, of order one or two such claims per year on average, with the total number now being somewhere in the low to mid tens. And not a single one is entirely convincing. Every single object is essentially unique, with its own particular considerations that make it more and/or less likely to be a dark galaxy.

Both of these papers claim to have alleviated the problem by finding a whole bunch more candidates. The first uses new data from the ASKAP telescope and comes up with 55 potential objects, while the second uses archival Arecibo data and finds 142. Impressive stuff – but are any of them plausible, or have the previous problems just reappeared in a larger sample ?

There are many difficulties with identifying a dark galaxy candidate. The resolution of radio telescopes that can detect their gas content is often much lower than optical instruments, which means you see a big blurry smudge on the sky. That makes pinpointing the exact position of the gas difficult, so it's hard to say whether it has an optical counterpart or not. It also makes estimating its total mass tricky : for this you need a precise measure of its size, so without it you can't say how much dark matter it really has. And even if you do have good resolution, you need good optical data as well to say if it's really dark or just very dim (though when you get to sufficiently dim objects the difference is arguably not that important).

An even bigger problem happens when you manage to overcome all this. Even if you have an isolated gas blob with the signatures of stable rotation that would need lots of dark matter to hold it together, and even if you're darn sure it's so optically faint that it might as well be dark... it's damn hard to say if the thing really is stable. You could just be seeing a bit of fluff leftover from some interaction or other, which can sometimes mimic the appearance of a dark galaxy. Nevertheless, there have been a few cases where "dark galaxy" at least looks like a very plausible explanation, though never one where we can be certain that's really what's been found.

Both of the papers attempt to do much the same thing though in slightly different ways. Starting with large HI samples (30,000 for ALFALFA and 2,000 for WALLABY), they combine these with optical data sets and trim them down in various ways : quality of the HI signal, confidence in the lack of optical counterpart, isolation, etc. ALFALFA (the Arecibo data) has an enormous area of coverage and huge sample size on its side, while WALLABY (from the ASKAP telescope) has higher sensitivity and resolution.

Since even the final candidate catalogues are, by the standards of dark galaxy research, really quite large, I'd be reluctant to say, "yep, this is definitely the solution, hurrah chaps, we've found them !". But nor would I at all dismiss them out of hand. Rather I would look at both of these papers as being potentially the foundation of interesting research, but it's too soon for any definitive results yet. These are both very solid starts, but we need to examine each and every object here in more detail, or at least a subsample. We need higher resolution data in all cases, deeper optical data... and most importantly, detailed studies of the local environment. We need to find the quintessential case of an isolated object with no plausible other origins, preferably rotating nice and quickly (which would mean fast dissipation if it wasn't bound by dark matter).

All that requires very careful, detailed work. Which of course we can now do, so kudos to them for that. But scientifically I'm neither excited nor dismayed. I am... intrigued.

The first paper finds its dark galaxies pretty much everywhere throughout its fields. There's not really any distance bias, so they occur at all masses – a few of them really quite respectable even by the standards of optically bright galaxies. Line widths look to be typically around 100 km/s, which is where we'd naively expect rotation – and hence a dark matter component – to be needed for stability. Sadly the resolution isn't good enough for them to attempt dynamical mass estimates, though this seems to me a bit strange – they have the upper size limit from the HI, so they could at least put a broad constraint on it.
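
For what it's worth, the kind of broad constraint I have in mind would look something like the sketch below (my numbers, not the paper's) : take half the line width as the velocity scale and the beam as an upper limit on the radius.

```python
# A rough dynamical mass ceiling from unresolved HI data : the beam gives an
# upper limit on the radius, the line width gives a velocity scale, so
# M_dyn ~< (W50/2)^2 * R_max / G for an edge-on disc. All numbers are
# representative assumptions, not values from the catalogue.
G = 4.301e-3     # pc (km/s)^2 / M_sun

w50   = 100.0    # km/s, a typical line width from the paper
r_max = 15e3     # pc, an assumed 15 kpc upper size limit from the beam

m_dyn = (w50 / 2.0)**2 * r_max / G
print(f"M_dyn ~< {m_dyn:.1e} M_sun edge-on; higher if closer to face-on")
```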

The other oddity is that they model the optical light profile of all their sources, where detected. This is ideal for quantifying whether any are Ultra Diffuse Galaxies (which are possibly closely related to truly dark galaxies) but they don't seem to do this. Maybe that's for a future paper.

The second paper attempts a lot more science. I have to say it's both strange and refreshing to see a member of the ALFALFA team being at least a little more enthusiastic about dark galaxy candidates; normally they insist on calling them 'almost darks' – including the quotes – which gets very annoying. None of that here ! I should stress, though, that both papers absolutely treat everything with the caution it deserves, so don't mistake the brevity of my summary as evidence that they leap to conclusions. Neither group does that – I'm omitting the caveats just to get to the point.

Which for this second paper is as follows. As per the first, their candidates are everywhere, spanning a wide range of masses and line widths, but generally found in less dense environments than bright galaxies. They have higher gas fractions (relative to their inferred dark matter masses*) than optically bright galaxies of similar masses. And these properties are qualitatively similar to what's found in numerical simulations of galaxy formation that produce dark galaxies.

* Being a bit more gung-ho than the first group, they assume a size of the galaxy based on the scaling relation with respect to HI mass, hence they get a dark mass estimate.
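
The sort of scaling they mean is presumably something like the well-known HI mass–size relation (Wang et al. 2016 give log D_HI ≈ 0.506 log M_HI − 3.293, with D_HI in kpc measured at the 1 solar mass per square parsec isophote). I don't know exactly which calibration this paper adopts, so treat this as a sketch of the method :

```python
import math

# HI mass-size relation, Wang et al. (2016) calibration : the diameter at
# the 1 Msun/pc^2 isophote scales tightly with total HI mass. Whether this
# is the exact relation the paper uses is an assumption on my part.
def hi_diameter_kpc(m_hi_msun):
    return 10 ** (0.506 * math.log10(m_hi_msun) - 3.293)

for m_hi in (1e8, 1e9, 1e10):
    print(f"M_HI = {m_hi:.0e} M_sun -> D_HI ~ {hi_diameter_kpc(m_hi):5.1f} kpc")
```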

All this is very matter-of-fact, commendably so. It's a huge sign of how much things have changed in the last couple of decades : when I went to my first conference, back in 2007, dark galaxies were viewed by many as... not exactly fringe, but not really mainstream either. Most people agreed that they could at least exist, but were skeptical of their whole raison d'être – that they would be numerous enough to explain why cosmological models were massively overpredicting how many galaxies we would see. Indeed, for the next few years it often felt as if hardly anyone really believed in the standard models of galaxy formation, even if nobody had any better ideas to replace them. Quite frankly, if anyone had suggested they'd found a hundred or more dark galaxy candidates, no matter how cautiously, they'd have been laughed at. It wouldn't have been a career-ending move but it wouldn't have won them any friends either.

All that seems to have largely faded. The original models of galaxy formation, where gas falls into dark matter halos and a bunch of complicated stuff happens, now seem very much more popular, and so dark galaxies no longer seem like an almost dirty subject. What's happened is that we've got a lot better at doing all that complicated stuff and many of the problems which looked horrendous now look, if hardly definitely solved, then at least an awful lot more solvable. 

So, good work people. It's going to be extremely interesting to see how this pans out over the next few years. Watch this space.

Thursday, 27 March 2025

The Most Interesting Galaxies Are SMUDGES

Ultra Diffuse Galaxies remain a very hot topic in astronomy. You know the drill by now : great big fluffy things with hardly any stars and sometimes little or no dark matter, not really predicted in numerical simulations. I'm not going to recap them again because I've done this too many times, so I leave it as an exercise for the reader to search this blog and learn all about them. Get off yer lazy arses, people !

UDGs were first found in clusters but have since been found absolutely everywhere. Why clusters ? Well, because they're so faint, getting redshift (i.e. distance) measurements of them is extremely difficult. This means their exact numbers are fiendishly difficult to characterise : without distance you can't get size, which is one of their distinguishing properties – so without size you can't even count them. And if you can't count them, you can't really say much about them at all.
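
Here's the problem in numbers. The same apparent size on the sky corresponds to wildly different physical sizes depending on distance, and it's physical size (conventionally an effective radius of at least about 1.5 kpc) that makes something a UDG. The apparent size below is an assumption for illustration :

```python
import math

# Convert an apparent (angular) size to a physical one via the small-angle
# approximation, then apply the usual ~1.5 kpc UDG effective radius cut.
def physical_size_kpc(theta_arcsec, distance_mpc):
    theta_rad = theta_arcsec * math.pi / (180 * 3600)
    return theta_rad * distance_mpc * 1000   # Mpc -> kpc

theta = 10.0                  # arcsec, an assumed apparent effective radius
for d_mpc in (10, 50, 100):
    r = physical_size_kpc(theta, d_mpc)
    tag = "UDG-sized" if r >= 1.5 else "ordinary dwarf"
    print(f"at {d_mpc:3d} Mpc : R_e ~ {r:4.1f} kpc -> {tag}")
```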

Getting distances in clusters, however, is much easier. There the distance to the whole structure is already known. The first studies found lots of UDG candidates in clusters but very few in control fields, so most of those are certainly cluster members rather than just being coincidentally aligned on the sky. Of course it's always possible that a small fraction (at the few percent level or less) weren't really in the cluster and therefore not truly UDGs, but statistically, the results were definitely reliable.

The SMUDGES project (Systematically Measuring Ultra-Diffuse Galaxies) is a major effort to begin to overcome the limitation of relying on clusters for distance estimates*. In essence, they try to develop a procedure similar to the cluster studies but one which can be applied to all different environments. They want results which are at least statistically "good enough" to estimate the distance, even if there's some considerable margin of error.

* The main alternative thus far has been gas measurements, which give you redshift without relying on the much fainter optical data. This, however, has its own issues.

This paper is mainly a catalogue, and to be honest I rarely bother reading catalogue papers. In fact I only read this one to see what low-level methods they used to do the size estimates, since we have some possible candidate UDGs of our own we want to check. But as it turned out, they also present some interesting science as well, so here it is.

Most of the paper is given to describing these methodologies and techniques. It's pretty dry but important stuff, and as with the first cluster-based studies, they can't be sure that absolutely every candidate they find is really a UDG. Actually these measurements are, inevitably, quite a lot less reliable than the cluster studies, but they're careful to state this and the results are still plenty good enough to identify interesting objects for further study.

One interesting selection effect they note early on is that studies of individual objects tend to overestimate their masses (compared to studies of whole populations), since these tend to be particularly big, bright, and prominent. This at least helps begin to explain why some division has arisen in the community regarding the nature of UDGs : the objects studied by different groups are similar only at a broad-brush level, and in detail they may have significant differences. That's not a bias that was obvious to me, but maybe it should have been. It seems perfectly sensible with hindsight, at any rate.

And, once again, this is another study where the authors resort to flagging dodgy objects by eye, in another example of how important it is to actually look at the data. The machines haven't replaced us yet.

I won't do a blow-by-blow description of their procedures this time, but their final catalogue comprises about 7,000 objects, which they supplement with spectroscopic data where available. One of the main topics they address is the big one : what exactly are UDGs ? Are they galaxies with normal, massive dark matter halos but few stars, or do they instead have weird dark matter distributions ?

They conclude... probably the former. But this is not to say that they are "failed Milky Way" galaxies that have just not formed many stars for some reason : at the upper end they're probably still a few times less massive than that, and at the lower end that might be more than a factor ten difference. So mostly dwarf galaxies, but with normal dark matter distributions and very few stars. They get mass estimates from a combination of counting the number of globular clusters, which correlates with the total halo mass in normal galaxies, and their own statistical method to estimate other galaxy properties (which I don't fully understand). 
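
The globular cluster route is worth making concrete. The empirical relation is roughly linear, with something like five billion solar masses of halo per cluster in Burkert & Forbes-style calibrations; I don't know the exact version used here, so the constant below is indicative only :

```python
# Halo mass from globular cluster counts, assuming a linear relation of
# roughly 5e9 Msun of halo per cluster (a Burkert & Forbes-style
# normalisation; the paper's own calibration may differ).
M_HALO_PER_GC = 5e9   # M_sun per globular cluster, assumed

for n_gc in (2, 10, 30):
    print(f"N_GC = {n_gc:2d} -> M_halo ~ {n_gc * M_HALO_PER_GC:.1e} M_sun")
```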

These relations don't always work well, however, sometimes experiencing "catastrophic failure", by which they mean errors of an order of magnitude or more. Why this should be is impossible to say at this stage, but, intriguingly, might point to the dark matter distribution being indeed different in UDGs compared to normal galaxies, at least some of the time. Overall though this appears unlikely, because to make this work with the observed scaling relations, the dark matter would have to be more concentrated than expected, even though the stars are the exact opposite : much more spread out than usual.

Bottom line : they think UDGs are mainly dwarf galaxies (though a few may be giants), with normal dark matter contents but very poor star formation efficiency for whatever reason. I'm not so sure. They say the distribution of some parameters (e.g. stellar mass within a given radius) is the same for both UDGs and other galaxies but to me they look completely different; it doesn't help that the figure caption states two colours when there are clearly three actually used. What's going on here I don't know, but very possibly I've missed something crucial.

Of course this paper won't solve anything by itself, but it gives a good solid start for further investigations. As with the previous post, this is another example of how important it is to classify things in a homogeneous way. At least one SMUDGES object is found within our own AGES survey fields, and was in fact known to much earlier studies. Sometimes what can look at first glance to be a normal object actually turns out to be something much more unusual, but it's only when you have good, solid criteria for classification that this becomes apparent.

Which is all very good news for AGES. I suspect there are actually quite a lot more UDGs lurking in our data. All we need is a team of well-armed and angry postdocs to track them down... i.e. a great big healthy grant. Well, a man can dream.

Dey's Blue Blobs

Today's paper is more exciting than I can fully let on.

In the last few years there have been a handful of seemingly innocuous discoveries in Virgo that don't quite fit the general trends for normal galaxies. They're very faint, very blue, metal-rich*, and some are incredibly gas-rich. The most convincing explanation thus far is that they're ram pressure dwarfs : not galaxies exactly, but bound systems of stars that formed from condensations of gas stripped by ram pressure.

* Meaning they have lots of chemicals besides hydrogen, because astronomers have weird conventions like that.

The advantage of this explanation is that ram pressure is a high-speed phenomenon, so could easily explain why the objects are so far from any candidate parent galaxies (tidal encounters can do this too, but usually require lower interaction velocities), as well as why they're so metal-rich. Primordial gas is basically nothing but hydrogen and helium, and to get complex chemistry you need multiple cycles of star formation, which makes it virtually certain that the gas here must have originated in galaxies. Why exactly they've only just started forming stars is unclear, though it's possible they do have older stellar populations which are just too faint to identify. And these things really are faint, with just a few thousand solar masses of stars... in comparison to the usual millions or billions expected in normal galaxies.

One of the main problems in understanding these objects has been the understandably crappy statistics. With only a half-dozen or so objects to work with, any conclusions about the objects as a population are necessarily suspect. That's where this paper comes in.

Finding such objects isn't at all easy. They're difficult to parameterise and tricky for algorithms to handle, so they opt for a visual search. And quite right too ! Humans are very, very good at this, as per my own work (which I'll get round to blogging soon). Having just one person run the search would risk biases and incompleteness, so they use a citizen science approach based on Galaxy Zoo.

The result was a total of nearly 14,000 "blue blob"* candidates. But this is being extremely liberal, and many of these might just be fluff : noise or distant background objects or whatever. A more rigorous restriction in which at least three people had to identify each candidate independently reduces this to just 658. Further inspection by experts trimmed this to 34 objects – a still more than respectable improvement over previous studies. And while I previously berated them for claiming that the objects only exist in clusters without having looked elsewhere, this time they at least looked at Fornax as well as Virgo. Fornax is another cluster, but interestingly no candidates were found there.

* C'mon guys, this is the name we're going with ? Really ? Oh. Well, fine. Suit yourselves.
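
Back to the filtering for a moment : the consensus step is simple enough to sketch. Something like the following (the data layout here is hypothetical, not the project's actual schema) keeps only candidates flagged independently by at least three volunteers :

```python
from collections import Counter

# Consensus filter : count independent classifications per candidate and
# keep those with at least three votes. Candidate names are invented.
classifications = [
    "blob_001", "blob_001", "blob_001",   # three votes -> keep
    "blob_002",                           # one vote -> probably noise
    "blob_003", "blob_003",               # two votes -> still dropped
]

votes = Counter(classifications)
candidates = [name for name, n in votes.items() if n >= 3]
print(candidates)   # ['blob_001']
```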

But they don't stop with the results of the search. They cross-correlate their results with HI gas measurements from ALFALFA and, yes, AGES (thanks for the citations, kindly people !), and also observe eight of them with the 10m-class Hobby-Eberly Telescope for spectroscopy of the ionised gas. This is extremely useful as it provides a robust way of verifying that these objects are indeed in the cluster and not just coincidentally aligned, and also shows that the gas in the objects is being affected by the star formation.

Let me cover the main conclusions before I get to why I'm so excited by this work. First, their findings are fully consistent with and support the idea that these are ram pressure features. Their spectroscopy confirms the high metallicity of the objects, comparable to tidal dwarfs – so they have indeed formed from material which was previously in galaxies. They avoid the very centre of the cluster (where they'd likely be rapidly destroyed) and are preferentially found where ram pressure is expected to be effective.

There's also an interesting subdivision within these 34 candidates. 13 of these are "rank 1", meaning they are almost certainly Virgo cluster objects, whereas the others are "rank 2" and are likely to have some contamination by background galaxies. Most of the rank 2 objects follow the same general trends in colour and magnitude as normal galaxies, but the rank 1 are noticeably bluer. They're also forming stars at a higher than expected rate (though, interestingly, not if you account for their total stellar mass). So indeed these are galaxy-like but not at all typical of other galaxies : they are galaxian, not galaxies.

Now the fun stuff. They identify two supposedly optically dark clouds I found in Virgo way back when and have since based most of my career on, hence – exciting ! They do have optical counterparts after all, then. Actually, one of these is relatively bright, and I suggested it as a possible counterpart back in 2016. But it wasn't convincing, and its dynamics didn't seem to match well at all. These days of course everyone is all about the weird dynamics, but back then this seemed like a pretty good reason to rule it out. Since then, our VLA data has independently confirmed the association of the stars and the gas, and Robert Minchin is writing that one up as a publication.

That object has about twenty times as much gas as stars. The second object is altogether fainter, having a thousand times or more gas than stars ! Even with our VLA data we couldn't spot this*, and I probably wouldn't even believe this claim if they didn't have the optical spectroscopy to support it. It looks likely that in this case we're witnessing the last gasp of star formation, right at the moment the gas dissolves completely into the cluster.

* The VLA data has much better resolution than the original Arecibo data, so it can localise the gas with much greater accuracy and precision. This means that it can show exactly where the HI is really located, so if there's even a really pathetic optical counterpart there, we can be confident of identifying it. But of course, that counterpart must be at least visible in the optical data to begin with.

While they comment directly on two of our objects, they actually implicitly include another three measurements in the table. We never identified these as being especially weird; they just look like faint blue galaxies but nothing terribly strange. And that really underscores the importance of having enough resources to dedicate to analysing areas in detail, which, frankly, we don't. It also shows how important it is to quantify things : visual examination is great for finding stuff, but it can't tell you if an object is a weird outlier from a specific trend. Even more excitingly, almost certainly it means that there are a lot more interesting objects in our data that have already been found but not yet recognised as important.

But the most fun part came from doing a routine check. Whenever anyone publishes anything about weird objects in our survey fields, I have a quick look to see if they're in our data and we missed them, just in case. Every once in a while something turns up. This is very rare, but the checks are easy so it's worth doing. And this time... one of the other blue blobs has an HI detection in our data we previously missed.

Which is very cool. The detection is convincing, but there are very good reasons why we initially missed it. But I don't want to say anything more about it yet, because this could well become a publication for my PhD student. Watch this space.

Sunday, 2 March 2025

Taking galaxies off life support

Very long-term readers may remember my anguished efforts (almost a decade ago) to build a stable disc galaxy. Sweet summer child that I was, I began by trying to set up the simulations to just have gas or stars, but no dark matter. I thought – understandably enough – that adding more components would just make things more complicated, so best to start simple. I was planning to gradually ramp up the complexity so I could get a feel for how simulations worked, eventually ending up with a realistic galaxy that would sit there quietly rotating and not hurting anyone.

That wasn't what I got. Instead of a nice happy galaxy I got a series of exploding rings instead. Had that been a real galaxy, millions of civilisations would have been flung off into the void.

It turns out that dark matter really is frightfully necessary when it comes to keeping galaxies stable. Dark matter is a galaxy's emotional support particle, preventing it from literally flying apart whenever it has a mild gravitational crisis. Stable discs are easy when you have enough dark mass to hold them together.

(Of course, this is only true in standard Newtonian gravity. Muck about with this and you can make things work without any dark matter at all, but I'm not going there today.)

You don't always need dark matter to keep things together though. Plenty of systems manage just fine without it, like planetary systems and star clusters. But it's come as a big surprise to find that there are in fact quite large numbers of galaxies which have little or no dark matter, a result which is now reasonably (and I stress that this is an ongoing controversy) confirmed. We always knew there'd be a few such oddballs, if only from galaxies formed from the debris of other galaxies as they interact. But nobody thought there'd be large numbers of them existing in isolation. So what's going on ?

Enter today's paper. This is one in a short series which to be quite honest I'd completely forgotten about, partially because the authors forgot to give the galaxy a catchy nickname. Seriously, they could learn a lot from those guys who decided to name their galaxy Hedgehog for no particular reason. I'm only half-joking here : memorable names matter !

But anyway, this was an example of a UDG with lots of gas that appeared to have no dark matter at all. I wasn't fully convinced by their estimated inclination angle though, for which even a small error can change the estimated rotation speed and thus the inferred dark matter content substantially. An independent follow-up paper by another team ran numerical simulations and found that such an object would quickly tear itself to bits, whereas if it was just a regular galaxy with a very modest inclination angle error then everything would be fine. And there have been many other such studies of different individual objects, all of them mired in similar controversies.
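
The inclination problem is easy to demonstrate with a few lines. The observed line-of-sight velocity only gives v_rot = v_obs / sin(i), so at low inclinations a small error in i swings the inferred rotation speed, and hence the dynamical mass, enormously. The projected velocity here is an invented example :

```python
import math

# Deprojecting rotation : v_rot = v_obs / sin(i). Since M_dyn scales as
# v_rot^2, modest inclination errors translate into large mass errors.
v_obs = 20.0   # km/s, an assumed projected rotation speed

for i_deg in (20, 30, 40):   # candidate inclinations
    v_rot = v_obs / math.sin(math.radians(i_deg))
    print(f"i = {i_deg} deg -> v_rot = {v_rot:5.1f} km/s, "
          f"M_dyn scales by {(v_rot / v_obs)**2:4.1f}x")
```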

Since then, however, I've become much more keen on the idea that actually, a lot of these UDGs really do have a deficit or even total lack of dark matter after all. The main reason being this paper, which is highly under-cited in my view. Now it's entirely plausible that any one object might have its inclination angle measured inaccurately*. But they showed that the inclination-corrected rotation velocity of the population as a whole shows no evidence of any bias in inclination. Low inclinations, high inclinations, all can give fast or slow rotating galaxies, consistent with random errors. That some show a very significantly lower than expected rotation therefore seems very much more likely to be a real effect and not the result of any systematic bias.

*Though all of these terms like "bias", "errors" and "inaccuracies" are, by the way, somewhat misleading. It's not that the authors did a bad job, it's that the data itself does not permit greater precision. That is, it allows for a range of inclination angles, some of which lead to more interesting results than others. The actual measurements performed are perfectly fine.

What about that original galaxy though ? AGC 114905 might itself still have had a measurement problem. Here the original authors return to redress the balance.

It seems that in the interim I missed one of their other observational papers which changes the estimates of exactly how much dark matter the galaxy should have; probably this is lost somewhere in my extensive reading list. The earlier simulation paper found that the object could be stable only (if at all) with a rather contrived, carefully fine-tuned configuration of dark matter, and there wasn't any reason to expect such a halo to form naturally. Couple that with the findings that it could easily be a normal galaxy if the inclination angle was just a bit off, and that made the idea of this particular object seem implausible, even if a population of other such objects did exist.

But that interim paper changes things. Whereas previously they used the gas of the object to estimate the inclination angle, now they got sufficiently sensitive optical data to measure it from the stars, and that confirms their original finding independently. They also improved their measurements of the kinematics from the gas, finding that it's rotating a bit more quickly than their original estimates, meaning it has a little bit more scope for dark matter. More significantly, the same correction found that the random motions are considerably higher than they first estimated.

What this means is that the dark matter halo can be a bit more massive than they first thought, and the disc of the galaxy doesn't have to be so thin. A thick disc with more random motions isn't so hard to keep stable because it's fine if things wander around a bit. So they do their own simulations to account for this, with the bulk of the paper given to describing (in considerable detail) the technicalities of how this was done.

They find that an object with these new parameters can indeed be stable. Rather satisfyingly, they also run simulations using the earlier parameters, as the other team already did independently. And they confirm that with that setup, the galaxy wouldn't be stable at all. So the modelling is likely sound, it's just that it depends quite strongly on the exact parameters of the galaxy. They confirm this still further with analytic formulae for estimating stability, showing that the new measurements of the rotation and dispersion are, once again, predicted to be stable.
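
I don't know the paper's exact formulae, but the standard analytic stability check is a Toomre-style criterion, Q = σκ / (πGΣ) for a gas disc, with Q > 1 meaning stable. A sketch with invented numbers shows why raising the velocity dispersion rescues the disc :

```python
import math

# Toomre-style stability parameter for a gas disc :
# Q = sigma * kappa / (pi * G * Sigma), with Q > 1 stable.
# All inputs are invented to illustrate the trend only.
G = 4.301e-3   # pc (km/s)^2 / M_sun

def toomre_q(sigma_kms, kappa_kms_per_pc, surf_dens_msun_pc2):
    return sigma_kms * kappa_kms_per_pc / (math.pi * G * surf_dens_msun_pc2)

# Same disc and epicyclic frequency, but cold vs dynamically hot gas :
print(f"cold, thin disc : Q = {toomre_q(4.0, 0.010, 5.0):.2f}")   # Q < 1, unstable
print(f"hot, thick disc : Q = {toomre_q(12.0, 0.010, 5.0):.2f}")  # Q > 1, stable
```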

But if the galaxy actually does have a hefty dark matter halo after all, doesn't that mean it's just like every other galaxy and therefore not interesting ? No. As far as I can tell, the amount of dark matter is still significantly less than expected, but also its concentration (essentially its density) is far lower : a 10 sigma outlier ! So yes, it's still really, really weird, with the implied distribution of dark matter still apparently very contrived and unnatural.

So how could such a galaxy form ? That's the fun part. It's important to remember that just because dark matter doesn't interact with normal matter except through gravity, this is not at all the same as saying it doesn't interact at all ! So some processes you'd think couldn't possibly affect dark matter... probably can*. Like star formation, for instance. Young, massive stars tend to have strong winds and also like to explode, which can move huge amounts of gas around very rapidly. It's been suggested, quite plausibly, that this is what's responsible for destroying the central dark matter spikes which are predicted in simulations but don't seem to be present in reality. The mass of the gas being removed wouldn't necessarily be enough to drag much dark matter along with it, but it could give it a sufficient yank to disrupt the central spike.

* And it's also worth remembering that just because dark matter dominates overall, this isn't at all true locally. This means that movement of the normal baryonic matter can't always be neglected. 

The problem for this explanation here is that the star formation density must be extremely low to get objects this faint. So whether there were ever enough explosively windy stars to have a significant effect isn't clear. Quantifying this would be difficult, especially because dwarf galaxies are much more dominated by their dark matter than normal galaxies – yes, they'd be more susceptible to the effects of massive stars because they're less massive overall, but the effect on the dark matter might not necessarily be so pronounced.

The authors here favour a more exotic and exciting interpretation : self-interacting dark matter. The most common suggestion is self-annihilating dark matter that's its own anti-particle, which would naturally lead to those density spikes disappearing. There could be other forms of interaction that might also "thermalize" the spike... but of course, this is very speculative. It's an intriguing and important bit of speculation, to be sure : that we can use galaxies to infer knowledge of the properties of dark matter beyond its mere existence is a tantalising prospect ! But to properly answer this would take many more studies. It could well be correct, but I think right now we just don't have enough details of star formation to rule anything out. Continuing to establish the existence of this whole unsuspected population of dark matter-deficient galaxies is enough, for now, to be its own reward.

Wednesday, 19 February 2025

Nobody Ram Pressure Strips A Dwarf !

Very attentive readers may remember a paper from 2022 claiming, with considerable and extensive justification, to have detected a new class of galaxian object : the ram pressure dwarf. These are similar to the much more well-known tidal dwarf galaxies, which form when gravitational encounters remove so much gas from galaxies that the stripped material condenses into a brand new object. Ram pressure dwarfs would be essentially similar, but result from ram pressure stripping instead of tidal encounters. A small but increasing number of objects in Virgo seem to fit the bill for this quite nicely, as they don't match the scaling relations for normal galaxies very well at all.

This makes today's paper, from 2024, a little late to the party. Here the authors are also claiming to have discovered a new class of object, which they call a, err... ram pressure dwarf. From simulations.

I can't very well report this one without putting my sarcastic hat on. So you discovered the same type of object but two years later and only in a simulation eh ? I see. And you didn't cite the earlier papers either ? Oh.

And I also have to point out an extremely blatant "note to self" that clearly got left in accidentally. On the very first page :

Among the ∼60 ram-pressure-stripped galaxies belonging to this sample, ionized gas powered by star formation has been detected (R: you can get ionized gas that is not a result of star formation as well, so maybe you could say how they have provided detailed information about the properties of the ionized gas, its dynamics, and star formation in the tails instead) in the tentacles.

No, that's not even the preprint. That's the full, final, published journal article !

Okay, that one made me giggle, and I sympathise. Actually I once couldn't be bothered to finish looking up the details of a reference so I put down "whatever the page numbers are" as a placeholder... but the typesetter fortunately picked up on this ! 

What does somewhat concern me at a (slightly) more serious level, though, is that this got through the publication process. Did the referee not notice this ? I seem to get picked up routinely on the most minor points which frankly amount to no more than petty bitching, so it does feel a bit unfair when others aren't apparently having to endure the same level of scrutiny.

Right, sarcastic hat off. In a way, that this paper is a) late and b) only using simulations is advantageous. It seems that objects detected initially in observational data have been verified by theoretical studies fully independently of the original discoveries. That gives stronger confirmation that ram pressure dwarfs are indeed really a thing.

Mind you, I think everyone has long suspected in the back of their minds that ram pressure dwarfs could form. After all, why not ? If you remove enough gas, it stands to reason that sometimes part of it could become gravitationally self-bound. But it's only recently that we've had actual evidence that they exist, so having theoretical confirmation that they can form is important. That puts the interpretation of the observational data on much stronger footing.

Anyway, what the authors do here is to search one of the large, all-singing, all-dancing simulations for candidates where this would be likely. They begin by looking for so-called jellyfish galaxies, in which ram pressure is particularly strong so that the stripped gas forms distinct "tentacle" structures. They whittle down their sample to ensure they have no recent interactions with other galaxies, so that the gas loss should be purely due to ram pressure and not tidal encounters. Of the three galaxies which meet these criteria, they look for stellar and gaseous overdensities and find one good ram pressure dwarf candidate, which they present here.

By no means does this mean that such objects are rare. Their criteria for sample selection are deliberately strict so they can be extremely confident of what they've found. Quite likely there are many other candidates lurking in the data which they didn't find only because they had recent encounters with other galaxies, which would mean they weren't "purely" resulting from ram pressure. I use the quotes because determining which factor was mainly responsible for the gas loss can be extremely tricky. And simulation resolution limits mean there could be plenty of smaller candidates in there. The bottom line is that they've got only one candidate because they demand the quality of that candidate be truly outstanding, not because they're so rare as to be totally insignificant.

And that candidate does appear to be really excellent and irrefutable. It's a clear condensation of stars and gas at the end of the tentacle that survives for about a gigayear, with no sign of any tidal encounters being responsible for the gas stripping. It's got a total stellar mass of about ten million solar masses, about ten times as much gas, and no dark matter – the gas and stars are bound together by their own gravity alone. The only weird thing about it is the metallicity, which is extraordinarily large, but this appears to be an artifact of the simulations and doesn't indicate any fundamental problem.
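
The "bound together by their own gravity alone" part is easy to sanity-check (my arithmetic, not the paper's) : the escape velocity from that much mass within a plausible radius comfortably exceeds the few km/s internal motions you'd expect of such an object.

```python
import math

# Escape velocity from ~1.1e8 Msun (stars plus ~10x as much gas) within an
# assumed characteristic radius of 1 kpc : v_esc = sqrt(2 * G * M / R).
G = 4.301e-3            # pc (km/s)^2 / M_sun

m_total = 1e7 + 1e8     # M_sun : stars + gas
r       = 1000.0        # pc, assumed radius

v_esc = math.sqrt(2 * G * m_total / r)
print(f"v_esc ~ {v_esc:.0f} km/s, so ~10 km/s internal motions stay bound")
```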

In terms of the observational candidates, this one is similar in size but at least a hundred times more massive. Objects that small would, unfortunately, be simply unresolvable in the simulations because the simulation doesn't have nearly enough particles. But this is consistent with this object being just the tip of a much more numerous iceberg of similar but smaller features. Dedicated higher resolution simulations might be able to make better comparisons with the observations, until someone finds a massive ram pressure dwarf in observational data.

I don't especially like this paper. It contains the phrase "it is important to note" no less than four times, it says "as mentioned previously" in relation to things never before mentioned, it describes the wrong panels in the figures, and it has many one-sentence "paragraphs" that make it feel like a BBC News article if the writer was unusually technically competent. But all of these quibbles are absolutely irrelevant to the science presented, which so far as I can tell is perfectly sound. As to the broader question of whether ram pressure dwarfs form a significant component of the galaxy population, and indeed how they manage to survive without dark matter in the hostile environment of a cluster... that will have to await further studies.

How To Starve Your Hedgehog

Today, two papers on hedgehogs... er, quenched galaxies. It'll make more sense later on, but only slightly.

"Quenched" is just a bit of jargon meaning that galaxies have stopped forming stars, if not completely, then at least well below their usual level. There are a whole bunch of ways this can happen, but they all mostly relate to environment. Basically you need some mechanism to get the gas out of galaxies where it then disperses. In clusters this is especially easy because of ram pressure stripping, where the hot gas of the cluster itself can push out gas within galaxies. In smaller groups the main method would be tidal interactions, though this isn't as effective.

What about in isolation ? There things get tricky. Even the general field is not a totally empty environment : there are other galaxies present (just not very many) and external gas (just of very low density). But you also have to start to consider what might have happened to galaxies there over the whole of time, because conditions were radically different in the distant past.

To cut a long story short, what we find is that giant galaxies seem to have formed the bulk of their stars way back in a more exciting era when things were just getting started. Dwarf galaxies in the field, on the other hand, are still forming stars, and in fact their star formation rate has been more or less permanently constant.

This phenomenon is called downsizing, and for a long time it had everyone sorely puzzled : naively, giant galaxies ought to assemble more slowly, so were presumed to have taken longer to assemble their stellar population, whereas dwarfs should form more quickly. Simplifying, this was due to a host of problems in the details of the physics of the models, and as far as I know it's generally all sorted out now. Small amounts of gas can, in fact, quite happily maintain a lower density for longer, hence dwarfs form stars more slowly but much more persistently.

Dwarfs are, of course, much more susceptible to environmental gas removal processes than giants, and indeed dwarfs in clusters are mostly devoid of gas (except for recent arrivals). And so conversely, any dwarfs which have lost their gas in the field are unexpected, because there's nothing very much going on out there : all galaxies of about the same mass should have about the same level of star formation. There's no reason that some of them should have lost their gas and others held on to it – it should be an all-or-nothing affair.

That's why isolated quenched galaxies are interesting, then. On to the new results !


The first paper concentrates on a single example which they christen "Hedgehog", because "hedgehogs are small and solitary animals" and also presumably because "dw1322m2053" is boring, and cutesy acronyms are old hat. Wise people, I approve.

This particular hedgehog galaxy is quite nearby (2.4 Mpc) and extremely isolated, at least 1.7 Mpc from any nearby massive galaxies. That puts it at least four times further out than the expected region of influence of any nearby groups, based on their masses. It's a classic quenched galaxy, "red and dead", smooth and structureless, with no detectable star formation at all.

It's also very, very small. They estimate the stellar mass at around 100,000 solar masses, whereas for more typical dwarf galaxies you could add at least two or three zeros on to that. Now that does mean they can't say definitively if its lack of star formation is a really significant outlier, simply because for an object this small, you wouldn't expect much anyway. But in every respect it appears consistent with being a tiny quenched galaxy, so the chance that it has any significant level of star formation is remote.

How could this happen ? There are a few possibilities. While it's much further away from the massive groups than where you'd normally expect to see any effect from them, simulations have shown that it's just possible to get quenched galaxies this far out. But this is extraordinarily unlikely, given that they found this object serendipitously. They also expect these so-called "backsplash" galaxies (objects which have passed through a cluster and out the other side*) to be considerably larger than this one, because they would have formed stars for a prolonged time, right up until the point they fell into the cluster.

* I presume and hope this is a men's urinal reference.

Another option is simply that the star formation in small galaxies might be self-limiting, with stellar winds and supernovae able to eject the gas. This, they say, is only expected to be temporary (since most of the gas should fall back in after a while), so again the chances of finding something like this are pretty slim. But I'd have liked more details about this, since I would expect that for galaxies this small - and it really is tremendously small - the effects of feedback could be stronger than for more typical, more massive galaxies. Maybe stellar winds and explosions could permanently eject much more of the gas, although on the other hand galaxies this small would have fewer massive stars capable of this.

Similarly, another possibility, which I don't think they mention, is quenching due to ram pressure in the field. Again, for normal dwarf galaxies this is hardly a promising option : for ram pressure to work effectively, you need gas of reasonably high density and galaxies moving at significant speeds, neither of which happens in the field. But studies have shown that galaxies in the field do experience (very) modest amounts of gas loss which correlates with their distance from the large-scale filaments. Ordinarily this is nothing substantial, but for galaxies this small, it might be. A galaxy this small just won't have much gas to begin with, and removing it will be easy because it's such a lightweight, so what would normally count as negligible gas loss might be fatal for a tiddler like this.
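
To put some flesh on that, here's a back-of-the-envelope version of the classic Gunn & Gott criterion : gas gets stripped where the ram pressure ρv² exceeds the gravitational restoring force per unit area, roughly 2πGΣ⋆Σgas. The numbers below are purely illustrative guesses for a tiny dwarf drifting through a thin filament - not measurements of Hedgehog - but they show how even feeble field-strength ram pressure can rival the grip of a sufficiently flimsy galaxy :

```python
import math

# Back-of-the-envelope Gunn & Gott (1972) stripping check.
# All input values are illustrative guesses, NOT measurements of Hedgehog.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # parsec, m
M_P = 1.673e-27    # proton mass, kg

def ram_pressure(n_cm3, v_kms):
    """Ram pressure rho*v^2 (Pa) for gas density n (cm^-3) and velocity v (km/s)."""
    rho = n_cm3 * 1e6 * M_P               # particles/cm^3 -> kg/m^3
    return rho * (v_kms * 1e3) ** 2

def restoring_pressure(sig_star, sig_gas):
    """Gravitational restoring force per unit area, ~ 2*pi*G*Sigma_star*Sigma_gas,
    with surface densities given in Msun/pc^2."""
    to_si = M_SUN / PC ** 2
    return 2 * math.pi * G * (sig_star * to_si) * (sig_gas * to_si)

# A hypothetical featherweight dwarf moving slowly through very thin gas :
p_ram = ram_pressure(n_cm3=1e-5, v_kms=100)
p_rest = restoring_pressure(sig_star=0.1, sig_gas=1.0)
print(f"ram {p_ram:.1e} Pa vs restoring {p_rest:.1e} Pa ->",
      "stripped" if p_ram > p_rest else "retained")
```

With these (made-up) numbers the two pressures come out comparable : conditions that would be utterly negligible for a normal dwarf sit right on the edge of fatal for a tiddler.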

The most interesting option is reionisation. When the very first stars were formed, theory says, there were hardly any elements around except hydrogen and helium and a smattering of others. Heavier elements allow the gas to cool and therefore condense more efficiently, so today's stars are comparative minnows. But with none of this cooling possible, the earliest stars were monsters, perhaps thousands of times more massive than the Sun. They were so powerful that they reionised the gas throughout the Universe, heating it so that cooling was strongly suppressed, at least for a while. In more massive galaxies gravity eventually overcame this, but in the smallest galaxies it could be halted forever.

Hedgehog, the authors say, is right on the limit where quenching by reionisation is expected to be effective. If so then it's a probe of conditions in the very early universe, one which is extremely important as it's been used a lot to explain why we don't detect nearly as many dwarf galaxies as theory otherwise predicts*. The appealing thing about this explanation is the small size and mass of the object, which isn't predicted by other mechanisms.

* They do mention that the quenched fraction of galaxies in simulations rises considerably at lower masses, but how much of this is due to reionisation is unclear.

This galaxy isn't quite a singular example, but objects like this one are extremely rare. Of course ideally we'd need a larger sample, which is where the second paper comes in.


This one is a much more deliberate attempt to study quenched galaxies, though not necessarily isolated ones. What they're interested in is our old friends, Ultra Diffuse Galaxies, those surprisingly large but faint fluffy objects, some of which appear to lack dark matter. In this paper the authors used optical spectroscopy to target a sample of 44 UDGs, not to measure their dynamics (the spectroscopic measurements are too imprecise for that) but to get their chemical composition. With this they can identify galaxies in a post-starburst phase, essentially just after star formation has stopped. That kind of sample should be ideal for identifying where and when galaxies get quenched.
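
As a flavour of how that works - this is a generic sketch of the usual "K+A" style of post-starburst cut, with typical literature thresholds rather than anything from this particular paper - the idea is to demand strong Balmer absorption (A stars present, so star formation ended within the last billion years or so) combined with no emission lines (so nothing is forming right now) :

```python
def is_post_starburst(ew_hdelta_abs, ew_halpha_em, ew_oii_em):
    """Crude post-starburst ("K+A") test from equivalent widths in Angstroms.

    Strong Hdelta absorption means A stars still dominate the light, so star
    formation stopped within the last ~Gyr; weak Halpha and [OII] emission
    means nothing is forming now. Thresholds are typical literature values,
    chosen here purely for illustration."""
    strong_balmer = ew_hdelta_abs > 4.0
    no_emission = ew_halpha_em < 3.0 and ew_oii_em < 2.5
    return strong_balmer and no_emission

print(is_post_starburst(5.2, 0.4, 0.8))   # -> True : quenched, and recently
```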

I'm going to gloss over a lot of careful work they do to ensure their sample is useful and their measurements accurate. The sample size is necessarily small because UDGs are faint, and their own data finds that some of the distance estimates were wrong, so a few candidates weren't actually UDGs after all. Their final result of 6 post-starburst UDGs doesn't sound like much, and indeed it isn't, but these kinds of studies are still in their very early days and you have to start somewhere.

Even with the small size, they find two interesting results. First, the fraction of quenched UDGs is around 20%, much higher than in the general field population. The stellar masses are a lot higher than Hedgehog's, though still small compared to most dwarfs, so this result needs to be treated with a bit of caution - but it's definitely interesting. Second, while most quenched UDGs do appear to result from environmental effects, a few are indeed isolated. Which is a bit weird and unexpected. UDGs in clusters might form by gas loss from more "typical" galaxies, but this clearly can't work in the field, so why only a select few should lose their gas isn't clear at all.


What all this points to isn't all that surprising, though in a somewhat perverse sense : it underscores that we don't fully understand the physics of star formation. The authors of the second study favour stellar feedback as being responsible for a temporary suppression of star formation. If this is common and repeated, with galaxies experiencing many periods of star formation interspersed with lulls, that could also make Hedgehog a bit less weird - if, say, it spends roughly equal total amounts of time forming and not forming stars, then it wouldn't be so strange to catch it during a quenched phase. And of course the lower dark matter content of UDGs surely also has some role to play in this, although what that might be is anyone's guess.

As usual, more research is needed. At this point we just need more data, both observations and simulations. That we're still finding strange objects that're hard to explain isn't something to get pessimistic about though. We've learned a lot, but we're still figuring out just how much further we have to go before we really understand these objects.

Monday, 17 February 2025

Sports Stars Can Save Humanity

I know, I know, I get far less than my proverbial five-a-day so far as reading papers goes. Let me try and make some small amends.

Today, a brief overview of a couple of visualisation papers I read while I was finishing off my own on FRELLED, plus a third which is somewhat tangentially related.


The first is a really comprehensive review of the state of astronomical visualisation tools in 2021. Okay, they say it isn't comprehensive, which is strictly speaking true, but that would be an outright impossible task. In terms of things at a product-level state, with useable interfaces, few bugs and plenty of documentation, this is probably as close as anyone can realistically get.

Why is a review needed ? Mainly because with the "digital tsunami" of data flooding our way, we need to know which tools already exist before we go about reinventing the wheel. As they say, there are data-rich but technique-poor astronomers and data-poor but technique-rich visualisation experts, so giving these groups a common frame of reference is a big help. And as they say, "science not communicated is science not done". The same is true for science ignored as well, of which I'm extremely guilty... you can see from the appallingly-low frequency of posts here how little time I manage to find for reading papers. 

So yeah, having everything all together in one place makes things very much easier. They suggest a dedicated keyword for papers, "astrovis", to make everything easier to find. As far as I know this hasn't been adopted anywhere, but it's a good idea all the same.

Most of the paper is given to summarising the capabilities of assorted pieces of software, some of which I still need to check out properly (and yes, they include mine, so big brownie points to them for that !). But they've also thought very carefully about how to organise all this into a coherent whole. For them there are five basic categories for their selected tools : data wrangling (turning data into something suitable for general visualisation), exploration, feature identification, object reconstruction, and outreach. They also cover the lower-level capabilities (e.g. graph plotting, uncertainty visualisation, 2D/3D, interactivity) without getting bogged down in unproductive pigeon-holing. 

Perhaps the best bit of pigeon-unholing is something they quote from another paper : the concept of explornation, an ugly but useful word meaning the combination of exploration and explanation. This, I think, has value. It's possible to do both independently, to go out looking at stuff without ever gaining any understanding of it at all, or conversely to try and interpret raw numerical data without ever actually looking at it. But how much more powerful is the combination ! Seeing can indeed be believing. The need for good visualisation tools is not only about making pretty pictures (although that is a perfectly worthwhile end in itself) but also in helping us understand and interpret data in different ways, every bit as much as developing new techniques for raw quantification. 

I also like the way they arrange things here because we too often tend to ignore tools developed for purposes other than our own field of interest. And they're extraordinarily non-judgemental, both about individual tools and different techniques. From personal experience it's often difficult to remain so aloof, to avoid saying, "and we should all do it this way because it's just better". Occasionally this is true, but usually what's good for one person or research topic just isn't useful at all for others.

On the "person" front I also have to mention that people really do have radically different preferences for what they want out of their software. Some, inexplicably, genuinely want everything to do be done via text and code and nothing else, with only the end result being shown graphically. Far more, I suspect, don't like this. We want to do everything interactively, only using code when we need to do something unusual that has to be carefully customised. And for a long time astronomy tools have been dominated too much by the interface-free variety. The more that's done to invert the situation, the better, so far as I'm concerned.


The second paper presents a very unusual overlap between the world of astronomy and... professional athletes. I must admit this one languished in my reading list for quite a while because I didn't really understand what it was about from a quick glance at the abstract or text, mostly because of my own preconceptions : I was expecting it to be about evaluating the relative performance of different people at source-finding. Actually this is (almost) only tangential to the main thrust of the paper, though it's my own fault for misreading what they wrote.

Anyway, professional sports people train themselves and others by reviewing their behaviour using dedicated software tools. One relatively simple feature of one such tool (imaginatively named "SPORTSCODE") is the ability to annotate videos. This means that those in training can go back over past events and see relevant features, e.g. an expert can point out exactly what happened and where – and thereby, one hopes, improve their own performance.
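
In data terms an annotation is almost embarrassingly simple – something like the following sketch, which is entirely my own invention and certainly nothing like SPORTSCODE's internal format :

```python
from dataclasses import dataclass

# A minimal annotation record : an expert marks a moment and position,
# tags it, and explains their reasoning for later playback. Purely my
# own sketch ; SPORTSCODE's internals are surely nothing like this.

@dataclass
class Annotation:
    time_s: float   # when in the video (or which channel, for a data cube)
    x: int          # position within the frame
    y: int
    label: str      # e.g. "missed tackle" or, for astronomers, "faint source"
    note: str       # the expert's reasoning, displayed on review

marks = [Annotation(73.5, 412, 230, "faint source",
                    "persists across several channels at a fixed position")]
for m in marks:
    print(f"t={m.time_s}s ({m.x},{m.y}) {m.label} : {m.note}")
```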

What the authors investigate is whether astronomers can use this same technique, even using the same code, to accomplish the same thing. If an expert marks on the position of a faint source in a data cube, can a non-expert go back and gain insight into how they made that identification ? Or indeed if they mark something they think is spurious, will that help train new observers ? The need for this, they say, is that ever-larger data volumes threaten to make training more difficult, so having some clear strategy for how to proceed would be nice. They also note that medical data, where the stakes are much, much higher, relies on visual extraction, while astronomical algorithms have traditionally been... not great. "Running different source finders on the same data set rarely generates the same set of candidates... at present, humans have pattern recognition and feature identification skills that exceed those of any automated approach."

Indeed. This is a sentiment I fully endorse, and I would advocate using as much visual extraction as possible. Nevertheless, my own tests have found that more modern software can approach visual performance in some limited cases, but a full write-up on that is awaiting the referee's verdict.

While this paper asks all the right questions, it presents only limited answers. I agree that it's an interesting question as to whether source finding is a largely inherent or learned (teachable) skill, but most of the paper is about the modifications they made to SPORTSCODE and its setup to make this useful. The actual result is a bit obvious : yes indeed, annotating features is useful for training, and subjectively this feels like a helpful thing to do. I mean... well yeah, but why would you expect it to be otherwise ? 

I was hoping for some actual quantification of how users perform before and after training – to my knowledge nobody has ever done this for astronomy. We muddle through training users as best we can, but we don't quantify which techniques work best. That I would have found a lot more interesting. As it is, it's an interesting proof of concept, but the potential follow-up is obvious and likely much more interesting and productive. I also have to point out that FRELLED comes with all the tools they use for their training methods, without having to hack any professional athletes (or their code) to get them to impart their pedagogical secrets.


The final paper ties back into the question of whether humans can really outperform algorithms. I suppose I should note that these algorithms are indeed truly algorithms in the traditional, linear, procedural sense, and nothing at all to do with LLMs and the like (which are simply no good at source finding). What they try to do here is use the popular SoFiA extractor in combination with a convolutional neural network. SoFiA is a traditional algorithm, which for bright sources can give extremely reliable and complete catalogues, but it doesn't do so well for fainter sources. So to go deeper, the usual approach is to use a human to vet its initial catalogues to reject all the likely-spurious identifications.

The authors don't try to replace SoFiA with a neural network. Instead they use the network to replace this human vetting stage. Don't ask me how neural networks work, but apparently they do. I have to say that while I think this is a clever and worthwhile idea, the paper itself leaves me with several key questions. Their definition of signal to noise appears contradictory, making it hard to know exactly how well they've done : it isn't clear to me if they really used the integrated S/N (as they claim) or the peak S/N (as per their definition). The former compares the summed flux of the whole source to the noise accumulated over it, the latter just the brightest bit to the local noise level, so the two numbers mean very different things. It doesn't help that the text is replete with superlatives, which did annoy me quite a bit.
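
Schematically, the vetting stage is just a binary classifier run over a small cutout around each SoFiA candidate. The sketch below is my own guess at the general shape of such a thing – the real architecture, cutout sizes and acceptance threshold are theirs to know and mine to invent :

```python
import torch
import torch.nn as nn

# Toy stand-in for the vetting stage : classify postage-stamp cutouts
# around each SoFiA candidate as real or spurious. The architecture and
# sizes here are illustrative guesses, not the paper's actual network.

class CandidateVetter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1),   # one logit : real (high) vs spurious (low)
        )

    def forward(self, x):
        return self.net(x)

# Eight hypothetical 32x32 moment-map cutouts (random noise for the demo) :
cutouts = torch.randn(8, 1, 32, 32)
scores = torch.sigmoid(CandidateVetter()(cutouts)).squeeze(1)
print(scores > 0.5)   # which candidates survive vetting
```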

The end result is clear enough though, at least at a qualitative level : this method definitely helps, but not as much as visual inspection. It's interesting to me that they say this can fundamentally only approach but not surpass humans. I would expect that a neural network could be trained on data containing (artificial) sources so faint a human wouldn't spot them, but knowing they were there, the program could be told when it found them and thereby learn their key features. If this isn't the case, then it's possible we've already hit a fundamental limit, that when humans start to dig into the noise, they're doing about as well as it's ever possible to do by any method. When you get to the faintest features we can find, there simply aren't any clear traits that distinguish signal from noise. Actually improving, in any significant way, on human vision, might be a matter of a radically different approach... but it might even be an altogether hopeless challenge.

And that's nice, isn't it ? Cometh the robot uprising, we shall make ourselves useful by doing astronomical source-finding under the gentle tutelage of elite footballers. 

Or not, because the fact that algorithms can be thousands of times faster can more than offset their lower reliability, but that's another story.

Phew ! Three papers down, several hundred more to go.

Saturday, 11 January 2025

Turns out it really was a death ray after all

Well, maybe.

Today, not a paper but an engineering report. Eh ? This is obviously not my speciality at all, in any way shape or form. In fact reading this only revealed to me even further the tremendous depths of my own ignorance regarding materials science and engineering practices. The former is something I never cared for at undergraduate level and the latter is something about which I know literally nothing. Naturally, I wouldn't normally even glance at a report like this, except that it's about a topic that's personally important to me : why Arecibo collapsed.

There's an okay-but-short press release version here. It's interesting to see the extent of the deconstruction at the site, which was already well advanced in 2021; I couldn't find a more recent photo. Otherwise the Gizmodo version is the 30-second read and not much else. For this post I read most of the full 113-page report, which really is "jaw dropping", at least in parts, as Gizmodo described it. Unsurprisingly there are fairly hefty tracts where my eyes glazed over, but there's still plenty in here that's accessible and understandable to non-engineers like me.

In a nutshell, Arecibo collapsed due to a combination of factors, two of which are predictable enough but the third is something nobody expected. The first two are inadequate maintenance and the impact of hurricane Maria. But it's important not to oversimplify, as these are intimately bound with the third : the effects of the radar transmitter. This is not quite a case where one can simply say, "if they'd just done their jobs properly then it'd still be standing today", though the report does contain some damning stuff.

Going through this linearly would end up being a shorter version of the report, which wouldn't really help anyone. If you want that level of detail you should go through it yourself; it's thorough to the point of going back to hand-written notes from the earliest days of the telescope. I have to say, though, that it's also highly repetitive in parts and in my view somewhat self-contradictory in places – but as it says, this is a preprint and still subject to editorial revision. Anyway, rather than doing a blow-by-blow breakdown, let me extract some broader lessons here.


Safety is not the same as redundancy

Probably the most general lesson is, I presume, obvious to anyone with an engineering background. But to an outsider like me the distinction between safety and redundancy was interesting : it makes a lot of intuitive sense, yet I'd never heard of it before. Safety, apparently, refers to the breaking point of any particular element. For example a cable with a safety factor of two could support twice its current load before it would snap. Redundancy, on the other hand, is about how many elements could fail before the whole structure would come crashing down. Arecibo's three towers, they say, don't provide redundancy because a single failure would inevitably mean a total collapse (compare with the six of FAST).

Of course it's very unlikely that even a single tower would ever fail because their safety factor was massive, so redundancy there was unnecessary (at least regarding any failure of the concrete towers themselves). The same can't be said for the metal cables, where the safety factor generally seems to have been about a factor of two or a bit less, in accordance with standard design practices – still plenty, but with a need for redundancy just in case. The report stops short of saying that there were any actual design flaws in the telescope, but does note that it obviously would have been better if there had been more towers. Safety factors, they say, were not the issue, although I think I detect some inconsistency here. Where they do issue an outright criticism, for example, is that while the original cable system had redundancy, this was no longer true after the 1997 upgrade that added the 900-tonne Gregorian dome and altered the cable system. Which is a little bit in contradiction to their claims that the telescope didn't suffer from design faults. It's a bit muddled.
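
A toy example of the distinction, with all numbers invented for illustration : three cables which are each individually "safe", yet with no redundancy between them.

```python
# Toy illustration of safety versus redundancy. All numbers invented.

def safety_factor(breaking_strength, load):
    """How much stronger an element is than its present load requires."""
    return breaking_strength / load

# Three cables (hypothetical strengths and loads, in kN) sharing a platform :
strength, load, n_cables = 2000.0, 1100.0, 3
print(f"each cable : safety factor {safety_factor(strength, load):.2f}")
# -> 1.82 : comfortably "safe", nothing close to snapping.

# Redundancy asks a different question : if ONE cable fails, the survivors
# must carry its share. Here that's 3*1100/2 = 1650 kN per remaining cable :
survivor_load = n_cables * load / (n_cables - 1)
print(f"after one failure : {safety_factor(strength, survivor_load):.2f}")
# -> 1.21 : uncomfortably close to 1, and with Arecibo's geometry a single
# tower failure would have meant total collapse regardless.
```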


Poor maintenance contributed to but did not cause the collapse

At least this is the overall gist I got. There's plenty of criticism levelled here, but it's hard to disentangle how serious the maintenance problems really were. As I read it, a more diligent maintenance program probably could have prevented the collapse, but this is partly with the benefit of hindsight – the failures which occurred were unprecedented (see below) but should have been spotted all the same. Of particular concern is that there wasn't enough knowledge transfer during the telescope's two changes of management (I'll speak from first-hand experience in declaring that management changes should be avoided for a host of other reasons; I went through one such change at Arecibo and God knows what the staff must have felt like when a second took place only a few years later). In addition, and probably worst of all, the post-Hurricane Maria repair efforts were much too slow – taking months to even get started, with years more to run – and targeted a cable which never failed. Major repairs needed to happen far sooner, but there was also a need to identify the failures more accurately.

The failures were of the cable sockets rather than the cables themselves. In these "spelter sockets", there's normally some degree of cable pullout after construction is complete and the structure assumes its full load : these sockets are widely used, so this is known to be absolutely normal and no cause for concern. But the report is somewhat ambiguous as to whether the extra pullout which happened here could have been noticed. Sometimes it sounds quite damning, describing the extra movement as "clear", but elsewhere it describes it as "not accessible by visual inspection". The amount of movement we're talking about, until the point of the collapse itself, was small, of order 1 cm or so. It certainly isn't something you'd spot from a casual glance, but you could measure it by hand easily enough with a ruler. Not noticing this, if I understand things correctly, meant that the cables were estimated to still have their original high safety factors whereas in fact they were much lower. They say this "should have raised the highest alarm level, requiring urgent action". Perhaps most damning, they also say that it is "highly unlikely" that this excessive pullout went unnoticed. They also note that there was a lack of good documentation of maintenance records and procedures.

The contribution of the recent hurricanes, especially Maria, was extremely significant in precipitating the collapse. In fact, "absent Maria, the Committee believes the telescope would still be standing today". As noted, any socket movement beyond the initial settling isn't normal at all; it did in fact happen, and should have been spotted – but even this, as we'll see, apparently wasn't enough to bring down the telescope by itself.

One final point is that Arecibo wasn't well liked by backend management. I often had the impression of a behind-the-scenes mood of delenda est Arecibo, or at the very least, that that was what some staff members sincerely believed was happening even if it wasn't true. The report notes that a 2006 NSF review recommended closing Arecibo by 2011 if other funding sources couldn't be found, which I found truly bizarre. This was less than ten years after a major upgrade and exactly at the point the biggest surveys were just beginning. As to why anyone would think that closing it at that particular moment was a good idea, I'm truly at a loss. Nothing about it, even with some familiarity with the larger-than-life politics behind the place, has ever made a lick of sense to me. 

This is not, I hasten to add, any suggestion of deliberately shoddy maintenance; inasmuch as that was inadequate, there is no need to attribute that to anything besides an incompetently low budget. One strikingly simple recommendation in the report is that funding sources for site operations (e.g. science and development) and maintenance be entirely separate, so there is no chance of any conflict of interest or competition for resources which are essential to both.


The failure was unprecedented

The final and most interesting point of the report, the big headline message, is that Arecibo may have failed because of its radar transmitter. The report is emphatic, repeating the point almost ad nauseam, that the kind of socket failures seen here have never before occurred in a century of operations of identical sockets used in bridges and other structures around the world. The damage from the hurricanes was significant, but not enough by itself to explain the failure. There is a crucial missing factor here.

The explanation suggested in the report is electroplasticity. In laboratory conditions, material creep (stretching) can be induced by electrical currents, apparently directly because of the energy released by the flow of electrons. As they note, in the lab this has been found under much higher currents operating for much shorter times, but could presumably work at lower currents if sustained for much longer periods. If correct, this would be Arecibo's final first, another effect of its unique nature. Such currents, they hypothesise, would have been induced by the powerful 1MW radar transmitter used for zapping asteroids and other Solar System objects. This would explain why the cables failed while still having apparently high safety factors, and possibly account for why the failures occurred in some of the youngest cables with no evidence of manufacturing defects (and weren't even the ones with the highest load). It would also, of course, explain why no such other socket failures have ever been seen. Hardly anything else has this combination of radar transmitter and spelter sockets, let alone in tropical conditions in an earthquake zone.

The report goes quite deep into the technical details of electroplasticity. Interestingly, it notes that even less powerful sources can induce currents in human skin that can be directly sensed a few hundred feet from the transmitter. The problem is that understanding the effects of these currents requires highly detailed simulations accounting for the complicated structure of Arecibo's cables, the exact path the current would follow, and data on low, long-term currents that at present doesn't exist. The most obvious deficiency, it seems to me, is that they don't estimate just how long the radar was ever transmitting for. Sure, it was up there for decades, but it wasn't used routinely : regularly, to be sure, but not daily. This is something where a crude estimate should be relatively easy by searching the observing records; even the schedule of what was planned (which didn't always match what was actually done, usually because the, err, radar broke) would give a rough indication.
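
For what it's worth, the crude estimate I have in mind would be a few lines of code once the records were digitised – something like this, where the file name and column layout are entirely made up :

```python
import csv
from datetime import datetime

# Crude cumulative-transmit-time estimate from observing schedule records.
# The file name, column names and format are all hypothetical ; real
# Arecibo scheduling archives would need their own parser.

def total_radar_hours(path):
    hours = 0.0
    with open(path) as f:
        for row in csv.DictReader(f):
            if row["instrument"].strip().lower() != "s-band radar":
                continue
            start = datetime.fromisoformat(row["start"])
            end = datetime.fromisoformat(row["end"])
            hours += (end - start).total_seconds() / 3600.0
    return hours

# e.g. total_radar_hours("schedule_1997_2020.csv") -> a rough upper limit,
# since scheduled time didn't always match actual transmission.
```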




If the report is correct, then there's little need for concern about other structures. The report strongly disagrees that Arecibo points to a need to revise safety standards for spelter sockets more generally; unless your bridge is in the path of a 1 MW S-band radar transmitter, you can carry on with your morning commute as usual. Well, that's good. Clearly, regardless of electroplasticity, something happened here that was truly exceptional, and it's not worth worrying about whether it will ever be repeated. Not unless you're an engineer, at any rate.

Whether electroplasticity really was the cause I'm not qualified to judge. Talking to someone older and wiser, the opinion was "they had to come up with something". I don't disagree with that – there just isn't enough data here to say anything for certain. It could be electroplasticity, or it could be something the committee just didn't think of. More analysis of the surviving hardware, along with more studies and simulations, is badly needed.

The broader lesson I would take from all this is that you can run things on a shoestring for a while, but you can't keep trying to do more with less indefinitely. Yes, I'm coloured by my political biases, but austerity to survive a short-term hit is very different to austerity as a way of life : one is manageable, the other isn't. Such a policy does far more harm than good. Yes, you save a little money immediately, but you ultimately lose an awful lot more a little further down the line. So if you're going to fund things, fund them as properly as you can. Incorporate redundancy thinking into managerial practices as well as engineering standards. Have teams large enough to survive the loss of several members. Hire separate observing support staff rather than expecting scientists to do everything. 

Finally, don't expect people to work for meagre compensation (and here I'm thinking not just financially but of other benefits too - high pay is useless with long hours and/or low holiday time) just because they enjoy their job. Not even the most wildly enthusiastic, energy-driven fanatic can operate at 110% for long. Just because someone is uncomfortable doesn't mean they're working extra-hard. Part of America's puritan hangover appears to be the thinking that work = suffering, so people who are suffering must be good workers. In the end this just leads to everyone hating their job and wanting to overthrow the system, but having no clue what to replace it with. Far better to reverse the thinking and presume that those who are happy and comfortable are the best workers. 

This has taken a rather political turn, but it's not unmotivated by my experiences at Arecibo. One notorious manager was definitely of the ilk who believe that more work good, less work bad. Thankfully this is not a mentality I've encountered much in Europe. And, in my view, understanding this isn't just good for us as people, but actually as scientists in getting the work done we want to do. By all means, take a liberal approach : let those who want to work obsessively, who actively thrive because of it, do so, but don't presume the same conditions produce the same results from different people. They don't. As with good software interface design, in the end, solving these issues is just as important for the science we want to do as the scientific problems themselves. Soft issues produce hard results.
