Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.

Thursday 24 June 2021

El Fatso The Less Magnificent

Back in March we had yet another apparently insurmountable challenge to the standard model of cosmology, and this paper from May claims that the problems have once again been surmounted after all.

To recap, last time a paper by Asencio et al. claimed that the El Gordo galaxy cluster is just too damn fat (i.e. massive) to exist. Or rather, according to the standard model of cosmology it shouldn't be possible to assemble such a gigantic behemoth in so short a time, and the collision velocity of its merging sub-clusters is too dang high. The authors looked at a huge simulation suite and found that no such objects should be formed at all, which is a pretty damning result if you take it at face value.

While the claims were made with a somewhat... robust level of conviction, the basic idea seemed reasonable enough to me. My main concern was whether the mass might somehow have been overestimated, since the frequency of such objects is very strongly mass-dependent. There's also the matter of the small area of the survey in which it was found. This means we have no clue whether El Gordo is a hideous bloated freak we can cheerfully ignore, or a representative of a much more interesting, widespread problem that we ought to confront.

Like the Bullet Cluster before it, this paper by Kim et al. circumvents the problem by saying that Asencio et al. were looking for the wrong object. They use new Hubble data to get gravitational lensing estimates of the mass, combined with simulations to figure out the most probable collision velocity whilst accounting for "radio relics" that were previously ignored. The bottom line is they say there's no serious conflict with the standard model after all.

So who's right ? 

Difficult to say. I'm no expert in the lensing techniques they use, so this was a tough read for me. Fortunately most of the rest is easier, and there are some interesting and very stark contradictions with the Asencio paper.

First, their new measurements decrease the mass by a modest but significant 20% or so compared to previous estimates (50% compared to the value Asencio used). This, they say, is because the previous results had to extrapolate out to the full size of the cluster, whereas their own data covers a larger area so this isn't necessary. Well, maybe, but their figure shows the mass of the cluster continues to increase out to their observed radius limit, and shows no signs of reaching a plateau, so I'm not sure why they're so confident about their new value. 

And anyway this decrease isn't enough to bring it into line with the masses Asencio found in their simulations. Here they appear to be in almost direct disagreement : Kim say the chance of such a cluster existing at such a distance is about 10%, whereas Asencio said it was close to zero. Even given the relatively small initial survey volume, Kim say it isn't surprising that such a monster was found, owing to the observational uncertainties on both the cluster properties and the cosmological parameter values. But why there's such a stark difference between these claims is very unclear.
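One way to make that 10% figure concrete (this is my own framing, not a calculation from either paper) is to treat cluster counts as Poisson-distributed : if the survey volume is expected to contain some small average number of El Gordo analogues, the chance of catching at least one follows directly. The expected count below is purely illustrative.

```python
import math

def prob_at_least_one(expected_count):
    """P(>= 1 such cluster in the survey volume), assuming Poisson statistics."""
    return 1 - math.exp(-expected_count)

# If the surveyed volume should contain ~0.105 El Gordo analogues on average
# (illustrative number), finding one is roughly a 10% event, as Kim claim;
# an expectation near zero, as Asencio found, makes the discovery near-impossible.
p_kim = prob_at_least_one(0.105)
```

The disagreement then boils down to a factor of many in the expected count, which is why the mass estimate matters so much : the abundance of clusters falls off very steeply with mass.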

What about the collision velocity ? Here it gets worse. Whereas Asencio searched for clusters colliding at an enormous speed of > 2,500 km/s, Kim say the true velocity is likely to be closer to 450 km/s, which would certainly pose no difficulty whatever for the standard model. But their conclusion here is frustratingly brief. They used some simple models to find the general parameters, then ran a hydrodynamic code (i.e. something very sophisticated) to verify it. But do we get to see this fancy simulation ? Heck no. And their description of what they did is frankly confusing, apparently deliberately excluding cases they consider unphysical and then coming up with a velocity they had already pre-excluded !

What I think they're trying to say, which may offer a way out of this mess, is that previous authors began with high infall velocities and/or started with the two cluster components too close together. Kim et al. seem to be saying that actually the two subclusters started off both further away and less massive, so their initial infall velocity was much smaller. Presumably, as they approach each other, they accumulate other background galaxies and grow in mass, eventually reaching a higher velocity for the collision itself (which therefore poses no challenge to physics : more mass => higher velocity). Hence Asencio et al.'s statistics are all correct, but they were looking for the wrong sort of progenitor objects. El Gordo's parents weren't necessarily all that big or fast when they first started their doomed embrace - their romance started gradually, only reaching a frenzy at the final climax.
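A back-of-the-envelope way to see how this could work : treat the two subclusters as point masses and apply energy conservation between a large initial separation and the moment of collision. All the numbers below are illustrative (they are not taken from either paper), and a point-mass model overestimates the speed for real extended haloes, but it shows how a modest initial infall velocity can still end in a very fast collision once the pair has fallen deep into a massive combined potential.

```python
import math

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def collision_speed(v_init, m_total, r_init, r_final):
    """Relative speed at separation r_final (kpc) for two point masses of
    combined mass m_total (Msun) starting at r_init (kpc) with relative
    speed v_init (km/s). Energy conservation; ignores extended mass
    profiles, dynamical friction and mass growth along the way."""
    v_sq = v_init**2 + 2 * G * m_total * (1 / r_final - 1 / r_init)
    return math.sqrt(v_sq)

# Illustrative numbers only : a gentle ~450 km/s infall at 4 Mpc separation,
# a 2e15 Msun combined mass, collision judged at 500 kpc separation.
v = collision_speed(v_init=450, m_total=2e15, r_init=4000, r_final=500)
```

Even starting slowly, the pair arrives at thousands of km/s, so "initial infall velocity" and "collision velocity" are very different quantities - which may be the crux of the disagreement.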

Ahem. Anyway, my impression is that this would have been considerably better as two papers. Most of this one is about the lensing measurements, with the simulation stuff feeling tacked-on and confusing. It would have been nice to have a much more rigorous examination of the Asencio result, e.g. how rare is El Gordo itself (rather than its parents) according to their simulations ?

My guess is that something like this analysis of Kim et al. will win out eventually : El Gordo will turn out to be an interesting beast, but not the CDM-slaying monster it's purported to be. But for now, Kim's result is just too unclear to be all that compelling. I suspect I'm missing something.

Head-to-Toe Measurement of El Gordo

We present an improved weak-lensing (WL) study of the high-z (z=0.87) merging galaxy cluster ACT-CL J0102-4915 (El Gordo) based on new wide-field Hubble Space Telescope (HST) imaging data. The new imaging data cover the 3.5 × 3.5 Mpc region centered on the cluster and enable us to detect WL signals beyond the virial radius, which was not possible in previous studies. Our updated mass is a more direct measurement since we are not extrapolating to R200 as in all previous studies. The new mass is compatible with the current ΛCDM cosmology.

Monday 7 June 2021

Full stack

One of the major difficulties with observing atomic hydrogen is that the emission is very weak. In the very nearby Universe, say within the Local Group, this isn't a big limitation. But at distances of just a few tens of millions of light years, it starts to demand gigantic telescopes and/or obscene amounts of integration time. Beyond about a billion light years it's nigh-on impossible.
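To put numbers on how steeply the signal fades : the standard relation M_HI = 2.36×10⁵ d² ∫S dv (distance d in Mpc, integrated flux in Jy km/s) means the flux from a fixed hydrogen mass falls as 1/d². A quick sketch, using an illustrative 5×10⁹ solar masses of HI :

```python
def hi_flux(m_hi, dist_mpc):
    """Integrated HI flux (Jy km/s) for an HI mass m_hi (solar masses)
    at distance dist_mpc, inverting M_HI = 2.36e5 * d^2 * S_int."""
    return m_hi / (2.36e5 * dist_mpc**2)

nearby = hi_flux(5e9, 10)    # a Local Volume galaxy : a strong ~200 Jy km/s signal
distant = hi_flux(5e9, 500)  # the same galaxy at 500 Mpc : ~0.08 Jy km/s
```

Moving the same galaxy fifty times further away costs a factor of 2,500 in flux, which is why the distant HI Universe has stayed dark for so long.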

This is a problem. In optical wavelengths we can see how star formation evolves throughout the ~14 billion year history of the Universe, and it evolves strongly. Galaxies today are but dim embers compared to the blazing fires they were a few billion years ago. But because of the weakness of the HI line, how that star formation's fuel supply has changed has remained largely a mystery. So we're missing a key part in the story of how galaxies grow up.

There have been a handful of attempts to go deeper. Short of building a gargantuan, all-crushing telescope, one solution is to stack observations of lots of galaxies together, getting you the equivalent of hundreds of hours of integration time. Thus far, none of these efforts have ever really looked terribly convincing in my opinion. You look at the spectra and go, "eeehhh, I mean, I've seen worse, but.... really ?".

This paper changes that. Actually the authors already have a similar paper, which I overlooked because I've fallen for a boy-who-cried-wolf fallacy and stopped reading such claims. So it was a pleasant surprise to see that the detection they present here is pretty unambiguously convincing, as indeed was that in their previous paper.

Using about 400 hours of integration on the GMRT in India, they average together almost 3,000 galaxies. Because it's averaging, all they retrieve is information on the average galaxy in the sample, which is the major downside of stacking (the alternative would have been to do a 400 hour integration on a single galaxy, but this would arguably be worse and certainly riskier, there being no guarantee that any individual galaxy contains a detectable amount of hydrogen). Even so, the results are much more interesting than I expected.
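The principle behind stacking is easy to demonstrate. This isn't the authors' pipeline - just a minimal toy model with made-up numbers : give each of 3,000 synthetic spectra a line far too weak to see individually, then average them, and the noise drops by a factor of √N (about 55 here) while the average line survives.

```python
import numpy as np

rng = np.random.default_rng(42)
n_spec, n_chan = 3000, 200          # ~3,000 galaxies, toy spectra
chan = np.arange(n_chan)

# A weak emission line (peak S/N = 0.1 per spectrum) buried in unit noise.
line = 0.1 * np.exp(-0.5 * ((chan - 100) / 5.0) ** 2)
spectra = rng.normal(0.0, 1.0, (n_spec, n_chan)) + line

# Averaging N spectra beats the noise down by a factor of sqrt(N) ~ 55.
stack = spectra.mean(axis=0)
noise_rms = stack[:50].std()                   # rms in line-free channels
peak_snr = stack[90:110].max() / noise_rms     # line now clearly detected
```

The price, as noted above, is that everything individual about each galaxy is averaged away - only the mean properties of the sample come out the other end.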

Most of the paper is understandably given to the observational details, but their main result is an average hydrogen mass of about 30 billion solar masses. There are a few galaxies known in the local Universe with masses this large, but not many. More interestingly, the gas to stellar mass ratio at this distance (about 8 billion years ago) is markedly different to what we see today. Nearby bright galaxies tend to have substantially less mass in gas than in stars, but these objects have ratios well above one. So there has, as expected, been a definite evolution in the gas fraction over time. That's not surprising, but it's important that we have concrete evidence now rather than mere hypothesising. The big, bright galaxies we see today were not the same earlier in their evolution.

(Though, as an interesting caveat, this might be a slightly misleading selection effect. We wouldn't expect today's big bright galaxies to have been so big and bright back in the past, because they need that time to build up their stellar content. But this should only be a modest effect, and the main result - that bright galaxies today are different to bright galaxies in the distant past - still stands.)

One of the other puzzles is where the gas is coming from. It's easy to see how it gets consumed in star formation and lost via stripping processes. My understanding was that while star formation has remained fairly constant in the recent past, the gas consumption timescale is only 1-2 Gyr, suggesting it's being replenished somehow (in particular, there was a lot of discussion about whether clouds seen around the Milky Way could be fuelling this, with the conclusion being quite clear that they could not). I dunno if I just missed some big development here, but they say the depletion timescale in the local Universe is more like 5-10 Gyr, meaning there's no mystery since the gas is being used very slowly, whereas at the greater distances it's only ~2 Gyr - so they're witnessing these galaxies at the peak of their consumption. As they start to run out of gas, star formation activity should naturally drop as the gas density decreases, so it's no mystery that there's still gas around today for slower consumption. This is all nicely consistent, so I'm a bit puzzled why earlier results seem to have given such a different estimate for the local consumption rate.
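The depletion timescale arithmetic is simple enough to check for yourself : t_dep = M_gas / SFR. With the ~3×10¹⁰ solar mass average HI reservoir reported here, a ~2 Gyr depletion time implies a star formation rate of around 15 solar masses per year. The local-galaxy numbers below are illustrative round figures matching the timescales quoted above, not values from the paper.

```python
def depletion_time_gyr(m_gas, sfr):
    """Gas depletion timescale in Gyr : gas mass (Msun) / SFR (Msun per yr)."""
    return m_gas / sfr / 1e9

t_highz = depletion_time_gyr(3e10, 15)   # ~2 Gyr at z ~ 1.3 : gas guzzled fast
t_local = depletion_time_gyr(3e9, 0.5)   # ~6 Gyr for a gas-poor local galaxy (illustrative)
```

On these numbers the puzzle largely dissolves : the high-z galaxies are burning through their fuel quickly, while their present-day descendants, with less gas and less star formation, can coast along for many more Gyr.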

Anyway, it's a very nice piece of work. An obvious follow-up would be to try a very deep integration on a single galaxy. The problem with stacking is that you wash out a lot of valuable information, so confirming the results on a single galaxy - which would also let you measure the kinematics - could be an interesting complement. Though just try getting "let's stare at this one galaxy for 400 hours and hope we get an interesting wiggly line as a result" past the proposal committee...


Giant Metrewave Radio Telescope Detection of HI 21 cm Emission from Star-forming Galaxies at z=1.3

We report a 400-hour Giant Metrewave Radio Telescope (GMRT) search for HI 21 cm emission from star-forming galaxies at z=1.18-1.39 in seven fields of the DEEP2 Galaxy Survey. Including data from an earlier 60-hour GMRT observing run, we co-added the HI 21 cm emission signals from 2,841 blue star-forming galaxies that lie within the full-width at half-maximum of the GMRT primary beam. This yielded a 5.0σ detection of the average HI 21 cm signal from the 2,841 galaxies at an average redshift ⟨z⟩≈1.3, only the second detection of HI 21 cm emission at z≥1.

Back from the grave ?

I'd thought that the controversy over NGC 1052-DF2 and DF4 was at least partly settled by now, but this paper would have you believe otherwise...