Sister blog of Physicists of the Caribbean. Shorter, more focused posts specialising in astronomy and data visualisation.

Wednesday 28 September 2016

All-sky HI data made even prettier


More fun with all-sky HI data. The radial distance here is velocity, not real distance, hence the weird-looking structures. Colour is scaled based on the flux range in each velocity channel, so you see a lot more structure than in the earlier versions.
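
For anyone curious how the per-channel colour scaling works, here's a minimal sketch (not my actual pipeline) : it assumes a FITS cube with axes ordered (velocity, latitude, longitude), and the file name and percentile clipping are purely illustrative.

```python
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt

# Hypothetical all-sky HI cube, shape (n_vel, n_lat, n_lon)
cube = fits.getdata("allsky_hi_cube.fits")

normed = np.empty(cube.shape, dtype=float)
for i, channel in enumerate(cube):
    lo, hi = np.nanpercentile(channel, [1, 99])      # flux range of this velocity channel
    normed[i] = np.clip((channel - lo) / (hi - lo), 0.0, 1.0)

# Because each channel is stretched over its own flux range, faint structure
# in low-flux channels is no longer swamped by the brightest channels.
plt.imshow(normed[len(normed) // 2], origin="lower", cmap="viridis")
plt.show()
```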

Might try another attempt at converting this to true distance, but my spare time is pretty close to zero for the next couple of weeks.

Friday 23 September 2016

Animating all-sky hydrogen data


Quick animation test. It needs more frames, really. There's nothing new here (detailed explanation here), but last time it involved a laborious manual procedure. Now I've got a pipeline to process data in arbitrary coordinate systems (Cartesian is for wimps !), and it seems to be working. It's sort of semi-functional in the realtime view, but that needs a lot more work. Turns out you can get away with quite low-poly spheres without any noticeable distortion, which is good.
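
The key step is nothing more exotic than a spherical-to-Cartesian conversion, with the velocity channel standing in for radial distance. A rough sketch of the idea (the function name and the assumption of longitude/latitude inputs in degrees are mine, not the actual pipeline code) :

```python
import numpy as np

def sky_to_cartesian(lon_deg, lat_deg, velocity, scale=1.0):
    """Map (longitude, latitude, velocity) to 3D points for plotting,
    treating velocity as the radial coordinate rather than true distance."""
    lon = np.radians(lon_deg)
    lat = np.radians(lat_deg)
    r = scale * velocity
    x = r * np.cos(lat) * np.cos(lon)
    y = r * np.cos(lat) * np.sin(lon)
    z = r * np.sin(lat)
    return x, y, z

# Example: a point at l = 120 deg, b = 30 deg, in a channel at 50 km/s.
print(sky_to_cartesian(120.0, 30.0, 50.0))
```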

That's what it's supposed to look like


After it went horribly wrong last time, here's what it's supposed to look like. All-sky HI data visualised in 3D, using velocity for distance.

Ooops


Coordinate mix-up turned my spherical HI data into some sort of minimalist exploding peach...

Wednesday 21 September 2016

Proving a negative is perfectly possible

Of course you can prove a negative. In one sense this can be the easiest thing in the world : your theory predicts something which doesn't happen ? You've just proved that theory doesn't work.

Let us say you have 1000 boxes, and your hypothesis is that there is a valuable diamond in one of them. You can open a box, look inside, and reliably determine whether there is a diamond in that box or not. Suppose you have opened two boxes and found no diamond. The probability of this result if there is no diamond at all (the null hypothesis) is 1. The probability of this result if there is one diamond is 0.998, so the likelihood ratio is 0.998 : two empty boxes are only very weak evidence that there is no diamond...
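
To make the arithmetic explicit (my own sketch, not taken from the linked post) : with one diamond hidden uniformly among N boxes, the chance of finding nothing in k opened boxes is (N - k)/N, and since the null hypothesis predicts empty boxes with probability 1, that is also the likelihood ratio.

```python
def likelihood_ratio(n_boxes, n_opened):
    """Likelihood of finding nothing in the opened boxes if one diamond is
    hidden uniformly among n_boxes, relative to the no-diamond hypothesis."""
    p_one_diamond = (n_boxes - n_opened) / n_boxes   # diamond must sit in an unopened box
    p_null = 1.0                                     # no diamond: empty boxes are certain
    return p_one_diamond / p_null

print(likelihood_ratio(1000, 2))    # 0.998, as in the example above
print(likelihood_ratio(1000, 900))  # 0.1 : much stronger evidence of absence
```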

SETI researchers argue (persuasively, I think) that they haven't opened very many of the boxes yet. Perhaps one day, after a much more exhaustive tour of the search space, the likelihood ratio will be much less than 1, and we can start meaningfully wondering about the eerie silence. Until then, the search continues and no one should be too discouraged.

I agree regarding civilizations comparable to, or less technologically advanced than, our own, but the Fermi paradox remains.

https://disownedsky.blogspot.com/2013/12/when-is-absence-of-evidence-evidence-of.html

Monday 12 September 2016

School's Out For Summer

I spent this week teaching a group of students the difference between a radio telescope and a radio...
http://summer2016.asu.cas.cz/program.html

The introductory talk was a lot of fun :


... the software installation, not so much.

Non-standard software installation. It's like wrestling a bear. Only, this particular bear is quite cross. He's just had to fill in a lengthy tax return while having Justin Bieber scream into his ear and he's lost all his pens. He's also been covered in grease, set on fire, attacked by a vicious horde of rabid squirrels and thrown on to a trampoline. He is not a very happy bear at all.

Friday 9 September 2016

Problems with citations

Of the literature Sanz-Martín surveyed, she found that just under half of publications cited papers incorrectly. Some papers misinterpreted their sources, while others cited irrelevant papers or referred to papers selectively in order to fit the author's argument. Sanz-Martín had to sift through hundreds of papers in minute detail. It was also a challenge on a personal level, she says, because "criticising the others' work is very difficult". For completeness, she and her colleagues also analysed papers that they themselves had written on jellyfish. They found exactly the same citation mistakes.

Oddly enough, a reluctance to criticise is not something I've witnessed much.

Mills's paper was a review, in which she questioned whether there was a global trend in jellification. Her answer was not a definitive "yes" or "no", so she framed her title as a question: "Are populations increasing globally in response to changing ocean conditions?" Many of the scientists who went on to cite her work seem to have assumed that her answer to that question was a firm "yes". "It didn't occur to me that by posing it as a question, and inviting people to make their own conclusions, that they wouldn't read it carefully," says Mills.

The sheer scale of the literature that scientists have to get to grips with could be one cause, Sanz-Martín says. "It is very difficult to handle all this information and also to be balanced," she says.

Furthermore, scientists need to compete for research funding, and this puts pressure on them to be bolder in their claims than they might otherwise be. "You need to have a very good reason to do research and to gain funding," Sanz-Martín says. "You need to get funding and you need to publish a certain amount of papers every year."
http://www.bbc.com/earth/story/20160905-are-swarms-of-jellyfish-taking-over-the-ocean

Thursday 1 September 2016

Please don't patent peer review

This does seem like such an incredibly corporate thing to do.

The description of the invention is lengthy, but is essentially a description of the process of peer review, but on a computer. For example, it includes a detailed discussion of setting up user accounts, requiring new users to pass a CAPTCHA test, checking to see if the new user’s email address is already associated with an account, receiving submissions, reviewing submissions, sending submissions back for corrections, etc, etc, etc.

Patenting a method of scientific review is just flat-out immoral. Don't do that. Just don't.

The patent departs slightly from typical peer review in its discussion of what it calls a “waterfall process.” This is “the transfer of submitted articles from one journal to another journal.” In other words, authors who are rejected by one journal are given an opportunity to immediately submit somewhere else. The text of the patent suggests that Elsevier believed that this waterfall process was its novel contribution. But the waterfall idea was not new in 2012. The process had been written about since at least 2009 and is often referred to as “cascading review.”

That's not even a great practice anyway. First, it's hardly difficult to submit to another journal; it's not something you need a special method for. Second, while this isn't necessarily a terrible practice, it's not a great one either : it shouldn't be seen as normal to resubmit if a paper is rejected. A better approach would be a system for choosing the appropriate journal at the submission stage. And it would probably be a good thing to have more distinction between the journals and their subdivisions, in my opinion.
https://plus.google.com/+RhysTaylorRhysy/posts/LfJxdkdhcz2

https://www.eff.org/deeplinks/2016/08/stupid-patent-month-elsevier-patents-online-peer-review

Back from the grave ?

I'd thought that the controversy over NGC 1052-DF2 and DF4 was at least partly settled by now, but this paper would have you believe otherwise...