Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.

Thursday, 8 February 2018

Stealth in space : define the problem first !

It's OK to be sensitive

I'm prompted to write this by discussions on stealth in space, but really it could apply to absolutely anything. I don't have time to write this up more fully, so this'll have to do.

When we search for something, what do we mean by the "sensitivity" level of our survey ? There are three basic ideas everyone should be aware of.

1) Sensitivity. Any survey is going to have some theoretical hard limit. Astronomical surveys always have noise, political surveys always have errors. If a source is below your noise level you have no chance of detecting it; if your question was flawed there are answers you won't be able to obtain. You might be able to improve this by doing a longer integration or asking more people, but with the data you've actually got, you're always limited. Above this limit, you might be able to detect something. Here's where it gets more subtle and often complicated - it's a very bad idea to take a given "sensitivity limit" and apply it without thinking more deeply about what it means.
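To make this concrete, here's a toy sketch of my own (all numbers arbitrary, not from any real survey), assuming simple Gaussian noise and a conventional 5-sigma detection cut. A source far below the limit never shows up, one far above it almost always does, and one sitting right at the threshold is detected or missed essentially at random :

```python
import numpy as np

rng = np.random.default_rng(42)

noise_sigma = 1.0             # the survey's noise level (arbitrary units)
threshold = 5 * noise_sigma   # a conventional 5-sigma detection cut

# Illustrative true fluxes : well below, near, at, and well above the limit
true_fluxes = np.array([0.5, 3.0, 5.0, 10.0])

# One noisy measurement of each source
measured = true_fluxes + rng.normal(0.0, noise_sigma, size=true_fluxes.size)

for flux, obs in zip(true_fluxes, measured):
    status = "detected" if obs > threshold else "missed"
    print(f"true flux {flux:5.1f} -> measured {obs:5.2f} : {status}")
```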

You're going to have some procedure for extracting the data you're interested in, e.g. the number of stars in a given region. This procedure, like the data itself, will have its own errors. You're going to find some sources which aren't really stars at all, and completely miss some things which are real stars. In general, the closer a source (be that a star or anything else that's detected, even if it isn't real) comes to the noise level, the more problems will result. In particular :

2) Reliability. Some of what you detect will be real, but some of it won't. Reliability is defined as the fraction of things you find which are really what you were looking for. If you go looking for elephants and find 100, but when you take a closer look at your photographs later on you realise that 25 of them were actually cardboard cut-outs that looked like elephants, then your survey is 75% reliable.
For the definition itself, the total number of elephants actually present is irrelevant. In practice, if you've got 900 real elephants and 100 fake elephants in a dense forest, this is going to be a lot harder than 9 real elephants and 1 fake elephant in an open field. However, reliability can be quantified relatively easily : you have to go back, examine each source more carefully and, if necessary, get better data for each one. There's usually some perfect test you can do to distinguish between an elephant and a stick or a very large horse. One should keep in mind that this will vary depending on the particular circumstances; an automatic elephant-finder might be 50% reliable on average, but 99% reliable in open sandy deserts and 2% reliable in dark grey rocky canyons. So even reliability figures must be handled carefully.
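Since the definition is just a ratio, it's trivial to compute once the follow-up work is done. A minimal sketch of my own, using the hypothetical elephant numbers from above :

```python
# Reliability : the fraction of catalogued detections that are real.
# Hypothetical numbers from the elephant example in the text above.

def reliability(n_confirmed, n_detections):
    """Fraction of things found that really are what we looked for."""
    return n_confirmed / n_detections

detections = 100   # shapes the survey flagged as elephants
confirmed = 75     # those that survived closer inspection of the photos

print(f"Reliability : {reliability(confirmed, detections):.0%}")   # -> 75%
```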

3) Completeness. This refers to the fraction of real sources you're interested in that you actually detect. Say you survey a volume of space containing 100 stars and you detect 99 of them. Then your survey is 99% complete. Easy peasy, except that it isn't. In practice, you very rarely know how many sources are really present. Quantifying completeness can be much harder than quantifying reliability, because you can't measure what you haven't detected. You can make some approximations based on those things you do detect, and hope that the Universe isn't full of stuff you haven't accounted for, but you can't be certain.
Galaxies are a bit of a nicer example than stars because they're extended on the sky. Consider two galaxies of the same total brightness, but with one 10 times bigger than the other. You might think that the larger one is easier to detect because it's bigger, but this is not necessarily so - its light will be much more spread out, so its emission will be everywhere closer to the theoretical sensitivity limit. Whether you detect it or not will depend very strongly indeed on your survey capabilities and your analysis methods. And at the other extreme, if the galaxy was much further away it could look so small you'd mistake it for a star.
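One common way to approximate completeness - sketched below under the same toy Gaussian-noise assumptions as before, and again my own illustration rather than anything from a real pipeline - is to inject fake sources of known brightness into your data and count how many your detection procedure recovers. The caveat above still applies : the fakes can only resemble the kinds of sources you already know about.

```python
import numpy as np

rng = np.random.default_rng(1)
noise_sigma = 1.0
threshold = 5 * noise_sigma   # the same 5-sigma cut as before

def injected_completeness(flux, n_trials=100_000):
    """Fraction of injected fake sources of a given true flux
    that survive the detection threshold."""
    measured = flux + rng.normal(0.0, noise_sigma, n_trials)
    return np.mean(measured > threshold)

for flux in (3.0, 5.0, 7.0):
    print(f"true flux {flux:.1f} -> completeness ~ {injected_completeness(flux):.0%}")
```

Note that a source whose true flux sits exactly at the 5-sigma limit is only recovered about half the time - which is exactly why quoting a single "sensitivity limit" is so misleading.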

What's that ? You say your fancy algorithm can overcome this ? You're wrong. These issues apply equally to humans and to algorithms searching the data. Now you can, to some extent, improve the quality of the data to improve your completeness and reliability. For example, in astronomy it's far more subtle than just doing a deeper survey - different observing methods produce very different structures in the noise, which can sometimes create features that are literally impossible to distinguish from real sources without doing a second, independent observation - no amount of clever machine learning will ever get around that. So choosing a better type of survey, or asking better questions, can get you a much better result than just taking a longer exposure or polling more people. But even these improvements have limits.

A good example is the recent claim of a drone which automatically detects sharks (http://www.bbc.com/news/av/world-australia-41640146/a-bird-s-eye-view-of-sharks), which apparently has a 92% reliability rate. The problem is that this tells you absolutely nothing (assuming the journalists used the term correctly) about its completeness ! It might be generating a catalogue of 100 shark-shaped objects, of which 92 turn out to be real sharks, but there could in principle be thousands of sharks it didn't spot at all. Of course that's very unlikely, but you get the point.
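To put numbers on it : that same 92% reliability figure is consistent with wildly different completeness values, because the denominator for completeness - the number of sharks actually present - is unknown. A hypothetical sketch :

```python
# 92% reliability only fixes the ratio of real to false detections.
detections = 100      # shark-shaped objects the drone flagged
real_detected = 92    # of those, genuinely sharks -> 92% reliable

for sharks_present in (100, 500, 5000):   # unknowable from the catalogue alone
    print(f"{sharks_present:5d} sharks present -> "
          f"completeness {real_detected / sharks_present:.1%}")
```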

Completeness and reliability both vary depending on the type of thing you're trying to observe and the method you're using. For example, the drone might detect 92% of all Great Whites but miss 92% of all tiger sharks (for some reason). Or your survey might be great at detecting stars and other point sources but be miserable at finding extended sources. For any survey, the closer the characteristics of your target are to the theoretical limit, the more problems you'll have for both completeness and reliability. In short, the fact that something is theoretically detectable tells you very little at all about whether it will be detected in reality.

So for stealth in space, it's imperative to define very carefully what you mean by stealthy. Do you mean you want a Klingon battle cruiser that can sneak up and poke you in the backside before you notice it ? Or do you just want to hide a lump of coal in the next star system across where, hell, it's difficult enough to detect an entire planet ? What level of risk are you willing to accept that it won't be detected - or might be detected, but not actually flagged as an object of interest ? Because whatever survey is looking for your stealth ships is gonna have some level of completeness and reliability which will depend very strongly on the characteristics of what it's searching for. It might very well record photons from the ship, but that tells you nothing about whether anyone will actually notice it.

2 comments:

  1. Good write-up on why statistics are never straightforward.

  2. ... at turns, I believe it's better to establish the terms of the survey itself, do the hard work - and treat your survey data as if it were being used to answer other questions than yours. Don't let the supposedly error-filled data collection protocols scare you off: if you're being as consistent as possible it's not an error, it's a limit.

    In software, there's a pattern called Observer, with a sister pattern called Subject or Observable. It might surprise you to see how often even highly competent programmers commingle the two. Moses Maimonides: "Teach thy tongue to say 'I do not know', and thou shalt progress."

