Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.

Thursday 25 July 2019

Being edgy for the sake of it

There are no fewer than three papers about Ultra Diffuse Galaxies on astro-ph today. The one I'm most interested in is an attempt to find edge-on UDGs using HI. For edge-on galaxies the HI line width gives a very accurate indication of the true rotation speed (and thus total mass) without needing any inclination corrections, which can carry irritating selection effects. In other words, this could give a good indication of whether these things are massive galaxies with absurdly low star formation rates, or just very extended dwarf galaxies. That's the central mystery and controversy of UDGs.

The selection effect at work with inclination is a little complicated, but it's worth being aware of. Suppose we have a galaxy with a given mass of hydrogen. The signal-to-noise of its detection depends on that mass, the rotation speed, and the viewing angle. If the galaxy is face-on, the S/N will be highest because all the gas is moving at the same velocity relative to us. If it's edge-on, the S/N will be lowest because rotation spreads the hydrogen flux across a range of frequencies. So if we search using hydrogen, we will preferentially detect face-on galaxies.
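To make the trade-off concrete, here's a toy sketch of how the peak S/N of a fixed HI mass drops with inclination. The 10 km/s turbulent broadening floor is my assumed value, not anything from the paper :

```python
import math

def observed_width(v_rot, incl_deg, v_turb=10.0):
    """Observed HI line width (km/s) for a rotating disc:
    the projected rotation plus an assumed turbulent floor."""
    return 2.0 * v_rot * math.sin(math.radians(incl_deg)) + v_turb

def relative_snr(v_rot, incl_deg, v_turb=10.0):
    """Peak S/N relative to the face-on case: for a fixed HI mass
    (fixed integrated flux) the peak flux scales as 1/width."""
    return observed_width(v_rot, 0.0, v_turb) / observed_width(v_rot, incl_deg, v_turb)

# A 50 km/s rotator seen edge-on is spread over ~110 km/s,
# so its peak S/N is roughly a tenth of the face-on value.
for i in (0, 30, 60, 90):
    print(i, round(observed_width(50.0, i), 1), round(relative_snr(50.0, i), 2))
```

The exact numbers depend on the assumed turbulence, but the direction of the bias doesn't.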

For optical wavelengths it should be the opposite. If they're face-on, their optical light is spread out over the largest possible area, minimising the S/N. If they're edge-on, we're seeing all their stars compressed into the smallest area, maximising the S/N. So edge-on galaxies should be easy to detect in the optical but difficult using hydrogen. In reality it will be a bit more complicated than that : a large, faint thing may be easier (in some circumstances) to detect than a small faint thing, but as a general guideline it's reasonable.

The other complication is that surface brightness (i.e. the number of stars per unit area on the sky) has to be corrected. You can't just measure it directly off the sky unless the galaxy is face-on. So correcting this when the galaxy is edge-on, and has all sorts of internal extinction effects (e.g. absorption by dust and gas), is tricky - but possible.
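In the idealised transparent-disc case the correction is just a geometric factor. A minimal sketch, assuming an optically thin disc and ignoring the dust that makes real life harder :

```python
import math

def face_on_surface_brightness(mu_obs, axis_ratio):
    """Correct an observed mean surface brightness (mag/arcsec^2) to its
    face-on value. For a transparent disc, inclining it brightens the
    observed value by 2.5*log10(a/b), so we add that back on.
    axis_ratio = b/a (minor over major), so 1.0 means face-on.
    A first-order sketch only - real discs have internal extinction."""
    if not 0.0 < axis_ratio <= 1.0:
        raise ValueError("axis_ratio must be in (0, 1]")
    return mu_obs + 2.5 * math.log10(1.0 / axis_ratio)

# An edge-on disc observed at 23.5 mag/arcsec^2 with b/a = 0.2
# corrects to ~25.2 - well into typical UDG territory.
print(round(face_on_surface_brightness(23.5, 0.2), 2))
```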

The first attempt to find UDGs in an HI catalogue looked at isolated objects and didn't particularly concern itself with viewing angles. These authors take the exact opposite approach, but still find that their galaxies are primarily in low density environments. That's not too surprising, since in higher density environments there are effects which can trigger star formation that consumes the gas, and/or directly remove it.

What I don't understand is their selection criteria. For the optical data they use SDSS DR7, but we're now on DR15. For the HI data they use the ALFALFA 40% complete catalogue, but the 100% complete catalogue has been out for months and the 70% one for ages and ages. Then - I think - they do surface brightness profile measurements to find objects with similar shapes to UDGs, although this isn't clear to me. Even the 40% catalogue has over 10,000 members, so doing this in an automatic way is tough. So I don't understand how, but somehow they reduce this down to a catalogue of 11 candidate edge-on UDGs, after correcting the surface brightness for inclination. I found this section odd and not at all clear about what they did, or how and why they did it.

Regardless, they find that their galaxies look relatively normal. They're very blue, edge-on discs, but nothing extraordinary. That suggests to me that there could be large numbers still undetected. They are, however, extremely gas rich, with HI mass-to-light ratios of between 5 and 30. So their mass really is dominated by gas, strongly suggestive of low star formation efficiency. Their most interesting plot shows that in colour-magnitude space, UDGs found in isolation and those found in groups occupy distinctly different regions, consistent with the idea that they could be formed by different mechanisms.

What about the all important dynamics ? Amazingly they say little about this - I don't know why people keep doing that. But their HI spectra look quite normal : broad velocity widths with double-horn shapes. Line widths are not extraordinarily high though, with a maximum of 150 km/s. That's in the upper range for dwarfs, but definitely not giants. I did a quick inspection of the Tully-Fisher relation, and their line widths and gas masses are, unlike some other UDGs, consistent with normal galaxies. That lends some credence to the idea that UDGs with unexpectedly low line widths are the result of a selection effect, with their inclination angles not corrected properly. On the other hand, since they were specifically looking for edge-on galaxies, they might have excluded those with low line widths by definition.
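Deprojecting a line width is trivial when you know the galaxy is edge-on, which is the whole appeal of the selection. A quick sketch :

```python
import math

def rotation_velocity(w_obs, incl_deg):
    """Deprojected rotation speed (km/s) from an observed HI line width.
    For edge-on galaxies sin(i) = 1, so the width needs no correction.
    Blows up as i approaches 0, which is exactly the face-on problem."""
    return w_obs / (2.0 * math.sin(math.radians(incl_deg)))

# The sample's maximum width of 150 km/s, seen edge-on:
v = rotation_velocity(150.0, 90.0)   # 75 km/s - dwarf territory, not a giant
print(v)
```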

Overall, the general view that UDGs are mostly dwarfs looks to be consolidating its position. It's still too early to say if this is true for all of them though, and the difference between UDGs in low and high density environments has yet to be explained.

Edge-on HI-bearing ultra diffuse galaxy candidates in the 40% ALFALFA catalog

Ultra-diffuse galaxies (UDGs) are objects which have very extended morphology and faint central surface brightness. Most UDGs are discovered in galaxy clusters and groups, but also some are found in low density environments. The diffuse morphology and faint surface brightness make them difficult to distinguish from the sky background.

Thursday 18 July 2019

The complexities of the Kent complex

I'll do a proper outreachy post with memes and stuff about this soon. For now, a brief summary.

AGES was such a fun survey we decided we didn't want to stop, so in effect we've kept going. The new survey, WAVES (Wide-field Arecibo Virgo Extragalactic Survey) will only cover the Virgo cluster, but with the ultimate goal of giving full cluster coverage to the AGES sensitivity level or nearly so. That will, amongst many other things, let us work out how common all those optically dark clouds really are, where they're found etc., and so give much better constraints on what the little blighters actually are.

But there's much more to Virgo than poxy little clouds. For one thing, there's the massive Kent complex, the main target of the first WAVES field. This was discovered using the shallower ALFALFA survey and had some follow-up with the VLA. As we all know, interferometers just aren't as sensitive as single-dish telescopes. So since this is a big, extended feature, surveying it with Arecibo but to a higher sensitivity level (a factor of four) seemed like a good idea. And it was.

The Kent complex has a unique set of properties. It's 150 kpc in projected length, though there are other clouds known which are longer. It's got a gas mass comparable to a large galaxy, but other clouds of similar masses are also known. It's got complex kinematics, but so do others. It doesn't have an obvious parent galaxy, but neither do other clouds. But none of the other features have all of these properties - in Virgo at least, this is unique.

Our observations show that it's probably weirder than Kent gave it credit for. The mass is more than a billion solar masses, more than doubling Kent's value. In our data it looks for all the world like a collection of galaxies, except that there's nothing visible at optical wavelengths at all. Importantly, Kent's VLA data rules out the dark galaxy hypothesis, because there's no sign of ordered motions either of the complex as a whole or within the individual clumps. We also found that much of the gas is in diffuse, extended emission, as well as in the discrete clouds that Kent found. We show that these clouds are indeed directly connected to each other, embedded in a sort of common envelope. And the whole thing looks a lot like a rhino.

How does this change how we interpret the cloud ? Kent leaned towards either NGC 4445 or NGC 4424, both reasonably nearby spiral galaxies, as the possible parents. Either ram pressure or tidal harassment (or both) could potentially have removed the gas. But honestly, neither of these candidates is great. They don't line up with the major axis of the cloud at all - in fact, they're basically orthogonal to it. That doesn't match any other similar features, and the fact that they're completely detached is also weird : ram pressure doesn't seem to do that, and removing this much gas through tides is extremely difficult. We considered a scenario where one of the galaxies lost all its gas very rapidly, which then formed an expanding cloud, but while that might work (actually we tried quite a few models which didn't make it into the paper - we might do a separate paper in future), it's hard to physically justify. And NGC 4424 has a tail which points in exactly the wrong direction, so it's almost certainly not that one.

There's another galaxy in this region which is one of the iconic images of ram pressure stripping : NGC 4522. This is already known from VLA data to have a short tail pointing towards the Kent complex. We found it has a second, similarly aligned tail, several times longer but at a different velocity. The alignment towards the complex is awfully tempting... but the velocity of the galaxy is 1,800 km/s different to the cloud. Even in a cluster, where galaxies swarm like angry bees, that velocity difference is just too high for the two features to be associated.

The final weird thing is just how massive the Kent complex is. In almost all galaxies with hydrogen streams, the mass in the tail is only a few percent or so of the mass in the disc. For reasons we don't fully understand, the gas appears to rapidly evaporate and/or disperse once removed from its host, likely because of the very hot intracluster medium. The problem is that the complex contains so much gas that, for any of the plausible donor galaxies, hardly any of it could have evaporated. And there's no obvious reason why the Kent complex should retain almost all of its gas as detectable atomic hydrogen when in every other case the majority becomes undetectable.

So what the hell is it ? We don't know. NGC 4445 remains the least worst candidate parent galaxy, but we mean that literally : it's only the best of a bad bunch, and not at all convincing. As for how the cloud formed, we just don't know.

The Widefield Arecibo Virgo Extragalactic Survey I: New structures in the Kent cloud complex and an extended tail on NGC 4522

We are carrying out a sensitive blind survey for neutral hydrogen (HI) in the Virgo cluster and report here on the first 5° x 1° area covered, which includes two optically-dark gas features: a five-cloud HI complex (Kent et al. 2007, 2009) and the stripped tail of NGC 4522 (Kenney et al.

IAU Symposium 355 : The Realm of the Low Surface Brightness Universe

I've decided to do everyone a favour and not combine the science and pretty pictures of landscapes from the latest conference. You can read about the travel experience of Tenerife here. Although I've already mentioned some of the outcomes, I thought I'd also give a more executive summary of the stuff I found most interesting.

The theme of this conference was the low surface brightness universe - the faint fuzzy stuff. It covered the whole range of such features, focusing on galaxies but also including zodiacal light, the challenges of observing (including a very nice talk from an "amateur") and data processing, philosophy of science issues and reproducibility (it's tricky to get really deep images, and sometimes they give remarkably different results), and was just generally awesome. So here are my highlights.


Mike Disney's introductory talk 

Mike is essentially the conference godfather, who predicted the existence of large numbers of low surface brightness galaxies way back when. He began by noting just how vicious the field can be - and he's right. The conference in Cardiff back in 2007 was the first I ever went to, and by far the most bitter and acrimonious. I'm not sure why it went down that way, but this one didn't. People said plenty of controversial things (especially Mike !) without ever seeming like they were about to start beating each other up.

Mike's major point was that there should be a large population of hitherto unseen faint galaxies, though he tried earnestly to present arguments both for and against this. While everyone agrees that galaxies certainly get a lot more interesting the deeper you look (elliptical galaxies especially), how many brand new galaxies this will pick up is more controversial (more on that later). I've covered some of these arguments before, but I'll be revising some of them thanks to this conference.

One of the issues I don't think people quite got nailed down was the significance of the low surface brightness galaxies. Many good arguments were presented that such hidden galaxies cannot contribute much to the total amount of starlight. This is probably true, but misses the point that such galaxies could still be dynamically massive and dark matter dominated. So perhaps both sides are right, depending on whether one thinks of stars or dark matter as being more important.

Mike's other controversial point didn't get so much attention, probably because the conference wasn't really focused on it : the galaxies seen in the spectacular Hubble Deep Field, he says, cannot possibly be the progenitors of today's galaxies because they're too bright. I'm skeptical, but there wasn't time to go into this much.


Johan Knapen's data talk

This was essentially a philosophy of science talk from the perspective of the sheer data volume that's coming our way very soon. This is a very real and serious challenge, with the SKA expected to produce exabytes of data per second. While Mike raised the point that defining the scientific method is very difficult, Johan took this a bit further. He listed four possible paradigms of science that have changed over time :
  1. Experiment-driven, as in the days of Newton and Galileo
  2. Theory-driven, as in Einstein and other analytic theoreticians
  3. Numerical simulations
  4. Data exploration
Others suggested that the fifth paradigm could be A.I. while the sixth would be letting Facebook do everything.

Johan's point (if I remember correctly) was that astronomy was going to have to move away from the traditional hypothesis-testing method we learn in schools and towards this "fourth paradigm" of being a data-driven approach. He's not wrong, but those who I talked to seemed to agree that this is already the case - and maybe always has been. I've made the point before at length : there's more than one right way of doing science, and the data usually tell you something very interesting but utterly unrelated to what you were interested in. I mean this very literally. And I believe it was Simon Driver who, in discussions afterwards, said that in observing proposals it should be absolutely legitimate to describe which area of parameter space you wanted to explore and why it was new, without needing to say exactly what you expect to find there. With this I fully agree. It's largely a waste of time describing observational results before you've got them.


Mohammad Akhlaghi's reproducibility talk

Mohammad doesn't like the fact that papers often contain highly vague instructions as to how experiments were carried out and how data was analysed. He's trying to tackle the latter issue by developing a package that makes it very easy to document the full details of the software used - the exact software name, version, and all its associated dependencies (libraries etc.) - in a way that makes it simple to include in a paper. It'll also automatically update any numbers if you change the software, without you having to redo the calculations by hand or edit the paper yourself, and it doesn't require installing any special software modules : the point is that the user has full control over what packages they use.

I was a bit skeptical listening to this, being acutely aware that many aspects are entirely subjective, but I came around to it afterwards. You'll have to take care that if you make changes, your new numbers are still consistent with your original conclusions. And I'm a bit doubtful that there are that many cases where changing software actually changes the answer. But the basic idea that papers should be as reproducible as possible is something I can definitely get behind - provided we remember that objective, repeatable measurements can still be absolutely wrong. Reproducibility means you can find the errors, not say, "my method is objective and therefore objectively correct", which is an easy trap to fall into.


Thomas Sedgwick's hunt for dark galaxies with supernovae

This was one of the most interesting and novel methods proposed for finding very faint galaxies. As long as the galaxy is forming some stars, it will have a few supernovae, and these can be detected. This is not something I would ever have thought possible - even though supernovae surveys are now decades old, I just don't think of stochastically exploding stars as something you can do a survey of. But you can. And you can even work out how many galaxies your survey implies, given some very reasonable, justified assumptions about the survey completeness and star formation rate.

Interestingly, it turns out that these corrections imply a galaxy distribution that's fully compatible with the standard model : that there are indeed large numbers of very faint galaxies out there, just as models have predicted but observations failed to find. This is a really cool result, but it relies on an enormously large statistical extrapolation, so it's probably safe to assume the problem isn't solved just yet (and kudos to the speaker for saying as much).

A somewhat similar talk was given by Raja Guhathakurta on looking for faint galaxies by searching for their globular clusters. The difference here is it should be possible to get much better completeness of the sample. We're planning to do a search for such features for the Virgo clouds I've been working on.


Nushkia Chamba's new definition of the size of galaxies

Nushkia knew me as "that guy with the hilarious blog", which absolutely made my day. She's come up with a new parameter for the size of galaxies, which is extremely interesting. If correct, it'll dramatically change how we think about ultra diffuse galaxies. But she asked that this not go on twitter, and I assume that includes other social media so I'll say no more about it. Expect to hear a lot more when it's published.


Daniel Prole's talk on the abundance of UDGs in the field

Are ultra diffuse galaxies a very common galaxy component that until recently went largely undetected, or are they just a smattering of exotic objects ? Several people made the point that UDGs were already known, but it's their abundance in new surveys that's got people excited about them again. Daniel (who has the same PhD supervisor as I did) is attempting to estimate how abundant they are in the field, which is much harder than in clusters since the volume is far larger. He's got a number, but rather surprisingly hasn't compared it to other numbers, so it doesn't mean much yet. I would have thought there are already numbers for more typical galaxies, but perhaps getting a fair comparison (e.g. correcting for survey biases) is harder than you might think.


Pavel Mancera-Piña's talk on the dynamics of UDGs

I was so glad to see this talk. You may remember that I've commented several times on the weird line widths of UDGs with HI detections, which tend to be much lower than expected (scroll to "things are getting weird" in that link for a plot). I emailed a couple of people about it - I got a cautious response from one and nothing from the other. I've shown several people and they all think it's interesting, but I never have time to work on this myself. Thankfully Pavel does, and he's done a much better job than me of demonstrating that this weird result probably isn't due to observational constraints : these galaxies do seem to be weird. While certain famous candidates now look unlikely to be genuine galaxies without dark matter, some of these UDGs might soon resurrect that possibility.

My one major concern, which I think should be relatively easy to address, is survey incompleteness. At any given mass, galaxies of low line width are easier to detect. If galaxies are rotating discs, that means we'll preferentially detect ones which are close to face-on from our perspective, which makes it hard to estimate their true line width (details here). So it might be that the survey is biased towards nearly face-on galaxies - and because they're so damn faint, it's hard to measure their inclination angle directly. In principle one could test this by calculating the line width a galaxy would need to escape detection and the corresponding inclination angle required to reach it. However, I very much doubt this will explain all the objects. Many of those galaxies with reasonably clear detections look quite convincingly close to edge-on, implying a negligible velocity correction.
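The test I have in mind is simple enough to sketch. Assuming the observed width scales as sin(i), the inclination needed for a given intrinsic width to slip below a survey's effective width limit is (illustrative numbers only, not from the talk) :

```python
import math

def max_hidden_inclination(w_true, w_limit):
    """Maximum inclination (deg) at which a disc with intrinsic width
    w_true would appear narrower than an effective width limit w_limit:
    solve w_true * sin(i) < w_limit for i. Returns 90 if the galaxy
    is intrinsically narrow enough to 'hide' at any viewing angle."""
    if w_limit >= w_true:
        return 90.0
    return math.degrees(math.asin(w_limit / w_true))

# An intrinsic 150 km/s disc masquerading as a 40 km/s detection
# would have to be within ~15 degrees of face-on:
print(round(max_hidden_inclination(150.0, 40.0), 1))
```

If the required inclinations come out implausibly small for most of the sample, the bias can't explain them - which is my suspicion for the convincingly edge-on ones.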


Freeke van de Voort's talk on simulations of the circumgalactic medium

Freeke has done some spectacular simulations of the gas structures around galaxies. Normally I think of this as probably very diffuse, fluffy stuff, interesting but not particularly photogenic. But Freeke's simulations really are spectacular - they look like the sort of thing you'd get if you told a Marvel CGI artist to "make some pretty gas - really go nuts with this". She notes that the simulations aren't yet converged. While they get the major galactic structures right, increasing the resolution keeps changing the results for the CGM. She's also found some gas clouds without stars or dark matter, but of course the resolution dependence makes the significance of this hard to assess.


Honourable mentions

There are too many to mention properly but I can't avoid a few others :

  • Eva Grebel made the point that there are a few red, isolated UDGs known in the field, while several people (but particularly Anna Ferré-Mateu) noted that at least some UDGs may indeed be giants even if they aren't the majority. Most people seemed happy with the notion that there may be several different ways to make a UDG. 
  • The Dragonfly team defended their data reduction procedures in the face of their failure to detect the double arms of NGC 5907, although no-one seems to know what happened. Another talk showed us that the double arms had been detected independently so they're almost certainly real.
  • Gaspar Galaz said that there's a large linear stream extending from Malin 1, which is just weird. How you get a linear stream intersecting a stellar disc, I just don't know.
  • Everyone agreed that galaxies look much nicer with deeper imaging but that it's jolly hard to do. 
  • Bärbel Koribalski gave a very nice overview of extended optically dark gas features. Very nice to know that it's not just me working on this !
  • Sarah Pearson showed how we'll soon have detections of large numbers of globular cluster streams around galaxies. I don't think of streams in statistical terms, but this could be an interesting way to constrain the behaviour of galaxies and their dark matter content.
  • Anna Saburova, a collaborator of mine, sounded like she was about to kill everyone (it's the Russian accent) but described how difficult it is to explain giant low surface brightness discs. They most likely have different formation mechanisms - some by catastrophic collisions, others through slow accretion.
  • Sebastiano Cantalupo explained dark galaxies at high redshift, which I was surprised to hear may be not all that dissimilar to the candidates at low redshift, with similar masses and dynamics. Definitely one I need to read up on more as I'd assumed the high redshift objects would be very different.
Which just about wraps it up. Plenty of background reading to do until the conference proceedings are released.

Wednesday 17 July 2019

Tales from the radio

The nice thing about atomic hydrogen is that we think we understand the basic emission physics well enough that we can use it to probe other things, like galaxy environments. All we have to do is measure the brightness and bam! we also get the mass, and can do more complex stuff like work out how galaxies are interacting and what's going on in clusters.
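The "bam!" step really is that simple for optically thin gas - the standard textbook conversion from integrated 21cm flux to HI mass :

```python
def hi_mass(flux_jy_kms, distance_mpc):
    """Standard HI mass estimate (solar masses) for optically thin gas:
    M_HI = 2.356e5 * D^2 * S, with the distance D in Mpc and the
    integrated line flux S in Jy km/s."""
    return 2.356e5 * distance_mpc**2 * flux_jy_kms

# A 1 Jy km/s detection at roughly the Virgo distance (~16.5 Mpc):
m = hi_mass(1.0, 16.5)   # ~6.4e7 solar masses
print(f"{m:.2e}")
```

The distance is the hard part in practice - the flux measurement itself is the easy bit.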

21cm emission is pretty unusual in that regard in radio astronomy. At lower frequencies, such as those described here, the emission mechanism is far less clear - and depends on all kinds of nasty relativistic effects which are not nice. Which either makes this stuff really interesting or obnoxiously difficult, depending on your point of view.

Here the authors describe a gigantic low-frequency radio source in a nearby cluster, about 140 Mpc distant and 900 kpc long, with the peak of the emission terminating at the galaxy IC 711. They have a nice shiny new map taken with India's Giant Metre-wave Radio Telescope which gives excellent resolution. Until recently, I'd heard that the GMRT worked quite well but only as long as you got one of the locals to do the data reduction for you, but lately I'm told it's producing some really good data even for external users. Certainly the map they show looks very impressive, so congratulations to them on that.

The feature itself is yuuuuge. It's extremely linear and well-collimated, though it has two distinct breaks where the angle changes very sharply, in one case by a neat right-angle. And it's not the only such feature known in this otherwise nondescript cluster. It also hosts another (much smaller) head-tail structured radio source and a double-lobed broader feature. Radio waves may be very low energy, but that doesn't mean they don't reveal some really weird stuff that's going on.

So what in the world is going on with this enormous feature ? It's not at all clear. It's presumably related to gas lost by the galaxy during its passage through the cluster. But given the length of the stream and the velocity dispersion of the galaxies, that indicates an age of about a billion years. Could it survive and remain linear in a cluster for that long ? It's unclear. The emission mechanism here is thought to be synchrotron radiation, but that should only last about 100 million years. They suggest that what's kept it going is the cosmic microwave background. Crazily, when the stream first started to form, the CMB would have been significantly brighter than it is today - it's THAT big.
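The billion-year figure is just a crossing-time argument : how long the galaxy takes to trail a stream of that length. A back-of-the-envelope check, assuming a typical cluster-member speed of ~900 km/s (my assumption for illustration, not a number from the paper) :

```python
KPC_IN_KM = 3.086e16    # kilometres per kiloparsec
SEC_IN_GYR = 3.156e16   # seconds per gigayear

def stream_age_gyr(length_kpc, v_kms):
    """Crude age estimate: the time needed for a galaxy moving at
    v_kms to leave behind a stream of the given projected length."""
    return length_kpc * KPC_IN_KM / v_kms / SEC_IN_GYR

# 900 kpc trailed at ~900 km/s gives ~1 Gyr - roughly ten times
# the expected ~0.1 Gyr synchrotron lifetime.
print(round(stream_age_gyr(900.0, 900.0), 2))
```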

What keeps it so narrow ? They suggest pressure confinement by the intracluster medium, which allows calculation of the ICM pressure. But what that implies for the ICM pressure throughout the cluster I'm not sure; it seems like an awfully nice coincidence would be necessary for it to have such a uniform thickness. Likewise it's not clear what's causing the two kinks. There's no way the galaxy's orbit could have changed so suddenly, but if it was due to local overdensities in the ICM, then the thickness of the tail should vary a lot more. Making sharp kinks with tidal fields is possible but difficult. And even the whole linear shape of the tail is a bit weird - you'd expect it to be at least a little bit curved as the galaxy moves through such a large orbit.

So what's going on ? No idea. The paper appears to be still at the submission stage after nearly three years, which is a bit concerning... though that could be because they occasionally use some rather strong phrases about other people's work being wrong. It definitely looks like one to keep an eye on.

GMRT observations of IC 711 -- The longest head-tail radio galaxy known

We present low-frequency, GMRT observations at 240, 610 and 1300 MHz of IC~711, a narrow angle tail (NAT) radio galaxy. The total angular extent of the radio emission, $\sim 22$ arcmin, corresponds to a projected linear size of $\sim 900$ kpc, making it the longest among the known head-tail radio galaxies.

Arecibo 2030

I no longer have any insider access to what's going on at Arecibo, but at least this white paper proposing upgrades for the next ten years is fairly optimistic. It's not as ambitious as I might have expected, but after the hurricane damage that's not so surprising. I got an email about this and might have contributed if I hadn't been at a conference with a 9am-7pm schedule and remote observations starting at 3:45am. It probably wouldn't have made a difference anyway - most of this is news to me.

The most immediate thing is how awesomely clean the dish is looking these days. I'd completely missed it, but they got $14 million in repair funding, which includes the long-awaited task of cleaning the dish (there are better images on Google). But as they say, making the dish look nice doesn't mean it's as good as new. Although the dish survived hurricane Maria - not least a direct strike from a falling line feed - it's significantly warped. This means it still needs significant repair work to bring it back to full sensitivity, although (if I read this correctly) this is already covered by the same hurricane relief fund.

Beyond restoration, the biggest improvement for the next decade should be ALPACA (Advanced L-band Phased Array Camera for Arecibo). This is a successor to the infamous ALFA instrument, but instead of having a measly 7 pixels as ALFA does, it will have the equivalent of 40. This improvement is possible because this uses a totally different sensor technology, and if and when it becomes operational, it will dramatically increase survey sensitivity and speed. Some people have described this as Arecibo's contribution to the SKA pathfinder telescopes being developed elsewhere.

Together with the new receivers goes the development of new back-end equipment that will give a much greater bandwidth. So not only will the telescope be able to cover more of the sky at once, but it will also be able to detect a greater range of frequencies at higher resolution, all at the same time. That means more surveys can be combined, greatly increasing the discovery power per unit survey time. This will also make it possible, they say, to implement new ways to reduce the effects of radio frequency interference, though I can't say I understand how that works.

Finally, looking further ahead, the prospect of replacing the dish was often mentioned. In principle it should be possible to create a surface of the precision needed to reach frequencies as high as 30 GHz, compared to today's 10 GHz, greatly increasing what the telescope can detect. That's a much bigger job, so the next decade will only involve a feasibility study - but they're still projecting that the funding required would only fall into the "medium" category by NSF standards. All in all - as ever - the level of funding required to fund Arecibo at a sensible level is very modest, and it remains the case that not funding it would be a damn travesty.

Astro2020 Activities and Projects White Paper: Arecibo Observatory in the Next Decade

The white paper discusses Arecibo Observatory's plan for facility improvements and activities over the next decade. The facility improvements include: (a) improving the telescope surface, pointing and focusing to achieve superb performance up to ~12.5 GHz; (b) equip the telescope with ultrawide-band feeds; (c) upgrade the instrumentation with a 4 GHz bandwidth high dynamic range digital link and a universal backend and (d) augment the VLBI facility by integrating the 12m telescope for phase referencing.

Tuesday 16 July 2019

An extended UV disc of NGC 300 ?

About a decade ago (and how scary that I can use that phrase !) extended UV (XUV) discs were the sexy topic in extragalactic astronomy. While the boring old optical data obviously traces light from stars, the high-energy ultraviolet emission only traces the emission from hot, bright stars. Since these only live for a few tens of millions of years, it's a good tracer of ongoing star formation. And since the stellar discs of galaxies are, though they don't have a definite edge, nevertheless reasonably well-defined structures, it came as quite a surprise to find some had very extended UV emission well beyond the stellar disc. How could stars be forming in such a low-density environment, and why weren't there any older stars out there as well ?

For a few years XUV discs provoked a host of interesting questions. Was this UV emission really tracing young stars ? Did this indicate very rapid stellar migration, or were stars actually forming way out there ? Did they form with the same mass distribution as other stars or does the initial mass function (IMF) vary depending on environment ? But somehow interest in the field seemed to die and it never provoked much beyond the initial burst of interest - I don't know why, or if these questions were ever satisfactorily answered.

This paper feels a bit like a blast from the past, or rather a small gust, presenting evidence of another XUV disc around the small spiral galaxy NGC 300. But it doesn't really tackle any of the major questions, and I'm rather skeptical of their main result anyway.

If you simply look at the optical and UV images of NGC 300, there's no obvious major difference between the two (but see below). Instead, to find the possible XUV emission they identify point sources in the GALEX data in two annuli centred on the disc : one close, one further away as their control background field. By taking a colour-magnitude diagram of the objects in the background field, they determine how to remove background objects in their target field.

The problem is that the distribution of both background and foreground candidate objects looks pretty well uniform across both their fields - there's no obvious overdensity associated with the galaxy. Granted, their CMDs do look different, but given that NGC 300 is a big, bright source, I'd expect to see it very clearly in the distribution of point sources. And they don't show the distribution of their candidate sources after removing the background ones, so if the overdensity is weak there's no way to see it.

I got irritated that they didn't present a comparison of the UV and optical, so here's a very quick-and-dirty stab at it - optical (DSS) on the left and GALEX UV on the right, aligned to the same scale :

The colour scheme may be ugly but it's really good for giving high contrast.

Don't be fooled by the similar colour scheme - the sensitivity of each image is probably very different, but it's more work than this post warrants for me to try and do it properly. But there are two reasonably clear points. First, the main disc is of equal size in both images. Second, there are hints of a more extended, irregular component in the UV disc. I tried smoothing the optical image to increase sensitivity, but I didn't find any sign of them. And these are only hints anyway - intriguing, but not compelling. Low surface brightness features can be tricksy.
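For the curious, here's a minimal sketch of why that smoothing trick works - entirely my own toy example on a synthetic image, nothing to do with the paper's actual data or pipeline :

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# Synthetic stand-in for an optical image: Gaussian sky noise plus a very
# faint extended blob whose peak is well below the 1-sigma noise level.
ny, nx = 200, 200
noise_sigma = 1.0
image = rng.normal(0.0, noise_sigma, (ny, nx))
yy, xx = np.mgrid[0:ny, 0:nx]
blob = 0.3 * np.exp(-((yy - 100) ** 2 + (xx - 100) ** 2) / (2 * 20.0 ** 2))
image += blob

# Convolving with a Gaussian kernel strongly suppresses pixel-to-pixel
# noise, while an extended source (much larger than the kernel) keeps
# most of its peak flux - so its per-pixel significance goes up.
smoothed = gaussian_filter(image, sigma=5.0)

# Measure the noise in an empty corner patch, far from the blob
# (using the input blob peak as a proxy for the source signal).
peak_snr_raw = blob.max() / image[:50, :50].std()
peak_snr_smooth = blob.max() / smoothed[:50, :50].std()
print(f"peak S/N raw: {peak_snr_raw:.1f}, smoothed: {peak_snr_smooth:.1f}")
```

The catch, of course, is that smoothing costs you resolution, so it only helps for features larger than the kernel - exactly the regime these faint outer streaks would be in.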

Slightly more convincing than their contour plots (which don't tell you much since they don't compare the UV and optical) are their surface brightness profiles. These show that the shape of the profile gets flatter (i.e. more extended) at shorter wavelengths going from IR to optical to NUV and FUV. There's a lot of scatter there though so I'm not entirely persuaded. And at much longer radio wavelengths the emission is much flatter again, but that traces the gas rather than the stars so that's not terribly surprising.

So does this galaxy have XUV or not, and if so, why ? Sadly their only comment is that the low density of UV emission beyond the main stellar disc may be due to the lower density of gas, but this isn't very informative. They do a lot of other work looking for correlations between other components, including trying to determine the age of the UV regions, but they don't really exploit this very much. It would be interesting to see if there's a radial age dependence, but they don't plot this.

If the faint streaks visible in the UV image are real, then they don't seem to bear much resemblance to the optical structures. Nor is there any obvious relation to the hydrogen data from radio telescopes, but I had trouble plotting contours of the data cube (not sure why, and I'm not interested enough to investigate). The most I'd be willing to venture is that there are hints the UV is more extended than the optical and has a different morphology in the outskirts. But I really don't think we can say any more than that. I would not be as confident as the authors in claiming an XUV detection, though at least someone is working on this stuff again.

Tracing the outer disk of NGC 300: An ultraviolet view

We present an ultra-violet (UV) study of the galaxy NGC~300 using GALEX far-UV (FUV) and near-UV (NUV) observations. We studied the nature of UV emission in the galaxy and correlated it with optical, H~I and mid-infrared (3.6 $\mu$m) wavelengths. Our study identified extended features in the outer disk, with the UV disk extending up to radius 12 kpc ($>2R_{25}$).

Thou Shalt Make An Accessible Data Archive

There's very little I can add to this so I'm just going to quote it. We all know data archives are important, and here's the data to prove it. By all means give people a proprietary period on their data, but all major facilities should make every effort to make as much data as possible public as soon as possible. As a standard, I suggest making science-level data (as opposed to the raw, unprocessed data) public upon first publication.

A significant challenge for the very near future is going to be sheer data volume, with the SKA, crazily, predicted to produce exabytes of data per second. That's terrifying. So there will be cases where data access will have to be limited, but that doesn't mean we can't minimise this. And simulations already tend to produce vast amounts of data, which it's simply wildly impractical to expect a small institute to host publicly (I've deleted terabytes of my own data because it's just too friggin' large). But I'm reminded of a nice talk I saw last week where the speaker described a procedure to easily include the full details of the software used, including the exact versions of every library and dependency. A move towards more of this kind of approach, where a paper describes - as we're taught in high school to do - exactly how to reproduce a result, would be a very good thing. Alternatively, there could be journals specialising in contemporary methods of analysis, so that one could simply cite a paper describing the method and then add very brief notes about any modifications used. The point is that if you can't provide the actual data, at least provide the tools to exactly reproduce it.
The scientific accessibility of astronomical data is critical to maintain a rich, flourishing, and growing discourse in astronomy. If the astronomy conversation is dominated by only a few voices, institutions, or countries, the entire scientific process, where old ideas are constantly challenged and new ideas are constantly proposed, can wither and die. Further, by expanding the community working on these missions and in astronomy we sow the seeds for the future success of the discipline. We note that engagement of the lay community through public outreach and citizen science is also critical to the success of astronomy and is similarly enhanced by access to archival data, but in this work we explicitly address scientific engagement with astronomical data.

Robust Archives Maximize Scientific Accessibility

We present a bibliographic analysis of Chandra, Hubble, and Spitzer publications. We find (a) archival data are used in >60% of the publication output and (b) archives for these missions enable a much broader set of institutions and countries to scientifically use data from these missions. Specifically, we find that authors from institutions that have published few papers from a given mission publish 2/3 archival publications, while those with many publications typically have 1/3 archival publications. We also show that countries with lower GDP per capita overwhelmingly produce archival publications, while countries with higher GDP per capita produce guest observer and archival publications in equal amounts. We argue that robust archives are thus not only critical for the scientific productivity of mission data, but also the scientific accessibility of mission data. We argue that the astronomical community should support archives to maximize the overall scientific societal impact of astronomy, and represent an excellent investment in astronomy's future.

Monday 15 July 2019

One of our arms is missing, send help

More follow-up from last week's low surface brightness conference. Actually this paper came out just before, but I didn't have time to do a write-up. And as it happens that's probably for the best.

Galaxies aren't very similar to cats or tables or bananas, though I expect you knew that already. You probably also know that those objects do look a bit different depending on what wavelength you use - a banana, for instance, doesn't look that much different from the background to a thermal camera, whereas a cat is much more obvious. But in all cases, the basic shape doesn't really change no matter what you do*. A cat doesn't start looking like a house if you take a long enough exposure. A table isn't revealed to have very faint but enormous tentacles by looking in the infra-red.

*Unless you go to really extreme values, where resolution and sensitivity effects dominate. E.g. a 1m wavelength radio wave wouldn't be a good way to measure a 0.5m cat.

For galaxies it's a different story altogether. The more thinly spread the stars, the harder they are to detect (and the same goes for the gas). And galaxies are pretty robust things but they're not invulnerable - interactions with other galaxies can distort them or tear bits off, often causing long, very very faint stellar streams. Although such features tend to only comprise a tiny fraction of the galaxy's total stellar mass, they can give important clues as to what's happened to that galaxy in the past. Which means that much of the really exciting stuff that the galaxy's done can only be revealed by taking really deep imaging.

Take, for example, the edge-on galaxy NGC 5907. In a short exposure it looks a bit like this :


Which is very nice but entirely normal. But in a much deeper image you see this :

There are various reasons the colours are different but they're not important here.
Much more interesting ! Except the authors of the study below have done their own deep imaging, and they find something different again :


This is from the same team who led the charge to the current renaissance in low surface brightness galaxies that's got everyone interested again. It's a rather easier feature to intuitively understand, at least considering only the main, brightest parts - it certainly looks like what you'd expect from a tidally-disrupted dwarf galaxy. The authors show the results of a simple simulation that reproduces this result very well, and even identify the possible main remnant of the progenitor satellite.

Except... why does it look so darn different from the previous image ? Here's where they come unstuck, which needs to be quoted at length :
We now turn to the most puzzling aspect of our study. The morphology of the stream in our data is a good match to the shallower discovery image of Shang et al. (1998), and also to a meta-analysis of NGC 5907 images by Lang, Hogg, & Scholkopf (2014) and to Subaru imaging (see Laine et al. 2016, and S. Laine, priv. communication). However, it is qualitatively different from that reported by M08. The most striking difference is that we do not confirm the presence of the second loop...  
We cannot definitively determine the cause of these discrepancies with M08, but a likely explanation lies in the image processing procedures that were applied to the data. The M08 data were obtained and processed by an experienced amateur astronomer. Amateurs have played an important role in this field as they convincingly demonstrated the power of small telescopes for low surface brightness imaging (see Martínez Delgado et al. 2010). However, the methods that are used by the amateur community typically do not allow for quantitative analysis, as their image processing is generally optimized for aesthetic qualities rather than preserving the linearity and noise properties of the data.
And that's it. But this just isn't good enough. It's fine to quote the sources which also don't show this second loop, but at the conference we were shown a whole panel of independent images which did show it. And processing for "aesthetic qualities" does not mean "photoshopped to look nice" - some amateurs are really bridging the gap between professionals and hobbyists. It seems fantastically unlikely to me that the second loop, a feature which is so clear and distinct in the middle image above, is some kind of artifact. You'd need one heck of a bizarre processing job to get something so coherent if it wasn't real, especially for a feature that's been re-detected by different observers with different telescopes.

My impression is that this paper should have been a full paper, not a letter. Sure, it's a newsworthy and eye-catching result. But with all that work gone into it, why not write a more detailed report ? There's no reason to rush that I can see. I had the feeling that this result has only made the community more skeptical of the authors' growing number of other controversial results. On a more positive note, interest in the faintest features is definitely growing, so expect more such dramatic discoveries and similar controversies ahead.

Dragonfly imaging of the galaxy NGC5907: a revised view of the iconic stellar stream

In 2008 it was reported that the stellar stream of the edge-on spiral NGC5907 loops twice around the galaxy, enveloping it in a giant corkscrew-like structure. Here we present imaging of this iconic object with the Dragonfly Telephoto Array, reaching $1\sigma$ surface brightness levels of $\mu_g \approx 30.5$ mag/arcsec$^2$ on arcminute scales.

This cow is probably small, not far away

I've written quite a bit about that famous galaxy apparently devoid of dark matter, of which you can read a pretty complete summary here. Some of the criticism levelled against the central claim, such as the statistical evidence from the globular cluster velocity dispersion, hasn't stood up to scrutiny. The main challenge has been to establish the distance : if the galaxy is far away, then it's large and weird, but if it's closer, then it's small and quite normal.

Thus far I've been more persuaded by the "large and weird" camp. But after an excellent conference last week (of which more in PotC soon !), and this latest paper, I'm slowly switching my allegiance. I'm no expert on the distance measurement techniques, but it looks to me like the authors here directly address the problems van Dokkum previously listed.

There are two arguments here that I, as a non-expert in distance measurements, find compelling. First, they say that there's evidence for two different galaxy groups at different distances - one at 13 Mpc, which the Trujillo camp have espoused and which would rectify all the weird anomalies of the galaxy, and one at 19 Mpc, which is close to the van Dokkum estimate and would make everything weird. That their method does find evidence of galaxies at the larger distance in this region makes it far more plausible that it's not as simple as someone having just made a mistake - it's much easier to believe that people have simply been confused because this little patch of the Universe is in fact quite confusing. And it establishes that there's nothing funny going on that makes it impossible for the method to find a larger distance here (for whatever reason).

Secondly, they point out that the velocities of the galaxies don't tell you much about their distances, especially as things in groups move around quite a bit. That is, at 13 Mpc we expect galaxies to be redshifted to an equivalent of about 900 km/s, whereas at 19 Mpc it would be more like 1300 km/s. But peculiar velocities (that is, deviations from the value expected assuming uniform Hubble expansion) of 200-300 km/s are nothing extraordinary at all for galaxies in groups. So the overall picture of the anomalous galaxies actually being closer, which also makes them rather typical objects in every other sense, looks completely self-consistent.
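To make the numbers concrete, here's a trivial sketch of that argument - my own arithmetic, assuming a round H0 = 70 km/s/Mpc (my choice, not necessarily the paper's) :

```python
H0 = 70.0  # Hubble constant in km/s/Mpc; assumed round value

def recession_velocity(d_mpc, h0=H0):
    """Expected redshift velocity for pure Hubble flow at distance d_mpc."""
    return h0 * d_mpc

v_near = recession_velocity(13.0)  # 910 km/s
v_far = recession_velocity(19.0)   # 1330 km/s

# Typical peculiar velocity for a galaxy in a group
v_pec = 250.0

# The velocity ranges predicted by the two distance hypotheses overlap,
# so a measured redshift alone can't cleanly discriminate between them.
near_range = (v_near - v_pec, v_near + v_pec)  # (660, 1160)
far_range = (v_far - v_pec, v_far + v_pec)     # (1080, 1580)
overlap = near_range[1] >= far_range[0]
print(near_range, far_range, overlap)
```

Which is just a long-winded way of saying that a galaxy observed at, say, 1100 km/s could comfortably belong to either group.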

I would point out though that when the authors say :
In fact, Blakeslee & Cantiello (2018) warn about the use of such a calibration in a range where it has not been explored...
Then this is very misleading. They are trying to refute a problem with the larger distance estimate, but the citation they give actually supports the larger value ! To omit this is not really very fair. And of course we should hear van Dokkum's inevitable response - I doubt we can quite call the matter settled just yet. But if I had to bet, I'd be switching my money to the "small and close" interpretation.

The TRGB distance to the second galaxy "missing dark matter". Evidence for two groups of galaxies at 13.5 and 19 Mpc in the line of sight of NGC1052

A second galaxy "missing dark matter" (NGC1052-DF4) has been recently reported. Here we show, using the location of the Tip of the Red Giant Branch (TRGB), that the distance to this galaxy is 14.2+-0.7 Mpc. This locates the galaxy 6 Mpc closer than previously determined.

Monday 1 July 2019

Where do ultra diffuse galaxies live ?

Another paper on ultra diffuse galaxies, those weird faint smudgy things which are about as extended as the Milky Way but a thousand times fainter. This one looks at a new sample from the ALFALFA survey of atomic gas. That's important because measuring the gas tells us about a) the star formation activity of a galaxy and b) the kinematics of the galaxy, i.e. how fast it's rotating and thus its total mass.
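Point (b) is the crucial one for the mass controversy. Here's the standard back-of-the-envelope version of that kinematic argument - a sketch of my own with made-up example numbers, not the paper's method :

```python
import math

G = 4.301e-3  # gravitational constant in pc * (km/s)^2 / M_sun

def dynamical_mass(w50_kms, radius_kpc, inclination_deg):
    """Rough dynamical mass enclosed within radius, from an HI line width.

    The observed line width W50 underestimates the full rotation by the
    inclination: v_rot ~ W50 / (2 sin i). This is the usual
    back-of-the-envelope M ~ R v^2 / G estimate, nothing more.
    """
    v_rot = w50_kms / (2.0 * math.sin(math.radians(inclination_deg)))
    radius_pc = radius_kpc * 1e3
    return radius_pc * v_rot ** 2 / G

# Edge-on case (i = 90 deg): no inclination correction needed, which is
# why edge-on UDGs are so valuable. Example numbers are illustrative only.
m_edge_on = dynamical_mass(60.0, 5.0, 90.0)  # ~1e9 M_sun: dwarf-like
print(f"{m_edge_on:.2e} M_sun")
```

Note how steeply the answer depends on inclination : the same 60 km/s line width assumed to come from a nearly face-on disc (small sin i) implies a far larger mass, which is exactly why uncorrected inclinations make the dwarf-versus-failed-giant question so slippery.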

The majority view seems to be leaning heavily towards a low total mass for these objects. Discovering a huge population of massive "failed" galaxies - in the sense that they suck at forming stars - would be really interesting, but it just wouldn't fit with everything else. Or so the legend goes.

I was a bit disappointed that this paper doesn't really tackle this issue much at all. While star formation is interesting in its own right, it wouldn't be at all surprising to find a bunch of dwarf galaxies with low star formation rates. It would be interesting to learn how such galaxies get to become so very extended, but that's just not as fun as failed giant galaxies. Personally I'm a little bit skeptical about ruling out the failed giants hypothesis just yet, but we'll see.

This paper doesn't really tackle the star formation issue much either. Instead, it takes a more oblique analysis by looking at the environment where gas-rich UDGs are found. That's a clever line of attack. One idea has been that dwarf galaxies in clusters could become more extended through all the tidal interactions with other cluster members. Obviously that mechanism doesn't work for isolated galaxies, which is one of the reasons this made this team's previous paper so interesting. Here they extend their previous analysis to UDGs which are not isolated, looking at how their properties change in different environments.

Or rather, doesn't. Their conclusion seems to be that environment doesn't make any difference to the properties of gas-rich UDGs. That's not to say that gas-poor UDGs might not be found preferentially in different places, but if they're gas rich, they seem just as happy in denser environments as in isolation. Their colours and stellar masses don't seem to vary much, though gas-rich UDGs do seem to avoid the very densest regions - but that's true of gas-rich galaxies in general. Annoyingly, though they do have line width measurements (i.e. how fast the galaxies are rotating), they only comment that these are rather low, and don't describe if or how they vary with environment.

What does this mean for theories of UDG formation ? Here too I felt the paper could be more bolshy. They say, "This environmentally independent behaviour is consistent with a formation scenario wherein UDGs evolve slowly because of low star formation efficiency and do not require an interaction with a cluster to become diffuse." Which is fine, but what specific models does it favour and which does it disfavour ? I commend the authors for sticking so ruthlessly to the facts, but I wanna know what they think this means. It feels like they've got a fantastic data set but then chosen to ignore all the most interesting stuff.

The environment of HI-bearing ultra diffuse galaxies in the ALFALFA survey

We explore the environment of 252 HI-bearing Ultra Diffuse Galaxies (HUDs) from the 100% ALFALFA survey catalog in an attempt to constrain their formation mechanism. We select sources from ALFALFA with surface brightnesses, magnitudes, and radii consistent with other samples of Ultra Diffuse Galaxies (UDGs), without restrictions on their isolation or environment, more than doubling the previously reported ALFALFA sample.

Back from the grave ?

I'd thought that the controversy over NGC 1052-DF2 and DF4 was at least partly settled by now, but this paper would have you believe otherwise...