Thursday, 21 June 2018

I'm still working on getting this into VR format, but as that's hitting a wall for the moment, here it is in conventional format. These are the 30,087 galaxies from the complete ALFALFA catalogue, detected in neutral hydrogen but shown here via their optical components. The images are to scale (barring a few that are wildly inaccurate) but exaggerated in size by a factor of 50. More details here.
Minor adjustments : I'm using ALFALFA's estimate of distance, which is more sophisticated than just converting the redshift directly. For size I'm no longer using the Petrosian radius thingy from the SDSS, as that's crap, but instead assuming a constant surface density for the hydrogen. This works much better, though it still fails from time to time, so you'll see a few clipped images. That probably happens when a galaxy has much less gas than normal.
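If you're wondering how that works : a disc of constant surface density has M = pi R² Σ, so the size follows straight from the measured HI mass. A minimal sketch in Python (the surface density value here is purely illustrative, not the one used for the actual renders) :

```python
import numpy as np

def hi_radius_kpc(m_hi_solar, sigma_hi=5.0):
    """Radius of an HI disc of constant surface density.

    Assumes M_HI = pi * R^2 * sigma, so R = sqrt(M_HI / (pi * sigma)).
    sigma_hi is in solar masses per square parsec; the default of 5.0
    is purely illustrative, not the value used for the renders.
    Returns the radius in kpc.
    """
    r_pc = np.sqrt(m_hi_solar / (np.pi * sigma_hi))
    return r_pc / 1000.0

# A galaxy with 10^9 solar masses of HI comes out at roughly 8 kpc :
print(hi_radius_kpc(1e9))
```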
The VR format has hit a frustrating point. The standard version (below) is easy and fast to render. The VR one requires Cycles, which does this really stupid synchronising-objects-and-loading-images thing that's much, much slower than the actual rendering process. By default it takes about six minutes to prepare and then about three seconds to render. With some clever tricks I've got that down to two minutes. Normally I'd throw that onto my beefy work machine and let it chug away for a while, but that's not possible here. I need Blender 2.79 for this, as previous versions limited the number of image textures to 1024. I can't run that version on my work computer, as 2.79 is compiled against a newer version of glibc, which means I'd have to update the OS. I can't render in passes either because, for some reason, GOD KNOWS WHY, images with transparent backgrounds don't render correctly in Cycles.
These things were sent to try us...
https://www.youtube.com/watch?v=qyaeGfMgqn0&feature=youtu.be
Monday, 18 June 2018
Ram Pressure Stripping Made Slightly Easier Than Before
...Or Actually Very Easy Indeed If You Use These Handy Online Tools
This paper was in development for an outrageously long time and all the hard work was done by the lead author, who intends it to be his final paper as first author. I only came in for the last few years, to give you some idea of the timescale involved.
Ram pressure stripping is a process which affects galaxies moving through an external medium - usually the hot, thin gas in a cluster. The motion of the galaxy causes the external gas to pile up in front of it, generating a "ram pressure" which can be sufficient to remove gas from the galaxy. Unlike tidal encounters, the faster the galaxy moves, the stronger the effect. This makes it especially important in clusters, where galaxies tend to have extremely high velocities - great for ram pressure stripping, lousy for tidal encounters. The particularly high density of cluster gas means that ram pressure can become strong enough to completely remove the gas even from a giant galaxy, while leaving the stars essentially unaffected. It's thought to be the dominant process affecting the evolution of gas-rich galaxies in clusters.
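For the quantitatively inclined, the classic single-formula estimate of this is the Gunn & Gott (1972) criterion : gas is stripped wherever the ram pressure exceeds the gravitational restoring force per unit area of the disc. A minimal sketch, with purely illustrative input values :

```python
import numpy as np

G = 6.674e-8  # gravitational constant in cgs units

def is_stripped(rho_icm, v, sigma_star, sigma_gas):
    """Gunn & Gott (1972) condition for ram pressure stripping.

    rho_icm    : density of the intracluster medium [g/cm^3]
    v          : galaxy velocity through that medium [cm/s]
    sigma_star : stellar surface density of the disc [g/cm^2]
    sigma_gas  : gas surface density of the disc [g/cm^2]

    Gas is removed where rho * v^2 > 2 * pi * G * sigma_star * sigma_gas.
    """
    ram_pressure = rho_icm * v**2
    restoring_force = 2.0 * np.pi * G * sigma_star * sigma_gas
    return ram_pressure > restoring_force

# Illustrative only : a galaxy at 1,000 km/s through gas of density
# ~10^-27 g/cm^3, with outer-disc surface densities of roughly
# 10 and 5 solar masses per square parsec. Prints True (stripped).
print(is_stripped(1e-27, 1e8, 2e-3, 1e-3))
```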
Calculating exactly what effect this has was, until now, a choice between a single, very crude analytical formula and arbitrarily complicated numerical simulations. Here we present a middle ground - a series of equations describing the effects of ram pressure under just about every conceivable situation and, much more usefully, interactive online tools for running simple simulations in a web browser. The full set is available here, but in particular see this one, which lets you control a simple particle simulation of RPS (with sensible default values to get you started).
(and no, I actually don't care if you hate JavaScript, go suck a lemon - I didn't write these anyway)
And it all seems to work pretty well. Results are in good agreement with both observations and simulations. This makes it possible to calculate quite specific effects of RPS on large, statistical samples of galaxies. For example, you can estimate how long a galaxy has been experiencing significant ram pressure, how long you'd expect its stripped gas tail to be, or how much of its displaced gas it should eventually re-accrete as it falls back onto the disc.
Of course this does require some assumptions about the motion through the cluster, the density of the surrounding material and so on, so there are plenty of uncertainties - but that's true of more complex simulations as well, and these are thousands of times faster, if not more. What they can't handle is galaxies moving edge-on through the gas : in that case the complex fluid physics of the two interacting gases becomes more important, so the method doesn't work very well. But in other situations it seems to do a very good job. My next paper will be about observations of galaxies with long stripped tails which the model successfully predicted.
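To give a rough flavour of the sort of calculation the browser tools perform - and I should stress this is my own toy guess at the scheme, not the authors' actual code - here's a one-particle version : the gas feels the disc's restoring gravity plus a constant ram pressure acceleration :

```python
import numpy as np

G = 6.674e-8  # cgs

def evolve_particle(z0, a_ram, sigma_disc, n_steps=10000, dt=1e12):
    """Toy 1D version of RPS : one gas particle above a disc.

    z0         : initial height above the disc plane [cm]
    a_ram      : constant ram pressure acceleration, pushing to +z [cm/s^2]
    sigma_disc : disc surface density [g/cm^2]; the restoring pull
                 of an infinite thin disc is 2 * pi * G * sigma,
                 always directed back towards the plane.
    """
    z, vz = z0, 0.0
    for _ in range(n_steps):
        a = a_ram - np.sign(z) * 2.0 * np.pi * G * sigma_disc
        vz += a * dt   # semi-implicit Euler step
        z += vz * dt
    return z

# If a_ram exceeds the restoring term the particle runs away to ever
# larger z (it's stripped); otherwise it falls back and oscillates
# about the plane, i.e. the gas is perturbed but retained.
```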
https://arxiv.org/abs/1806.05887
Monday, 11 June 2018
ALFALFA sky VR test
I've been wanting to render galaxy flythroughs in VR for ages, but Blender's 1024 image texture limit has been restrictive. Not any more - the limit has been removed in 2.79. There are also encoding options which look to me to be giving much better results, though I can't vouch for how well this will translate to YouTube. Here's a fairly small proof-of-concept test with 5,000 galaxies, detected by (and with distance measurements from) the ALFALFA hydrogen survey. Optical images are from the Sloan Digital Sky Survey.
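For anyone wanting to try something similar, the core step is just converting each galaxy's sky position and distance into Cartesian coordinates for placing a textured plane in the scene. A minimal sketch (plain Python rather than the Blender API; the function name is my own) :

```python
import numpy as np

def sky_to_cartesian(ra_deg, dec_deg, dist_mpc):
    """Convert RA/Dec (degrees) and distance (Mpc) to x, y, z in Mpc."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    x = dist_mpc * np.cos(dec) * np.cos(ra)
    y = dist_mpc * np.cos(dec) * np.sin(ra)
    z = dist_mpc * np.sin(dec)
    return x, y, z

# Each galaxy then becomes a plane at (x, y, z), scaled to its physical
# size times the exaggeration factor, with its SDSS image (transparent
# background) applied as a texture.
```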
This is designed for VR headsets/Google Cardboard. I'm curious what people are using to view these, as my own headset (so far as I can tell) cannot get YouTube to play 3D 360 VR correctly, so I just view the original video file instead.
I do these periodically whenever ALFALFA updates their catalogue. The first one, the 30% catalogue, had about 11,000 galaxies. The second (70%) had about 22,000. The final catalogue has 31,000. It should be entirely feasible to render that, since the galaxy images use a surprisingly small amount of memory.
More explanations of the renders here : http://astrorhysy.blogspot.com/2013/03/galaxies-are-pretty.html
https://youtu.be/5W1i0Rm1bQY
Beautifully wrong
The answer to the headline question - can beautiful physics be wrong ? - will be obvious to anyone except a beginner : of course it can. Happens all the time.
If we accept a new philosophy that promotes selecting theories based on something other than facts, why stop at physics? I envision a future in which climate scientists choose models according to criteria some philosopher dreamed up. The thought makes me sweat.
That kind of reasoning normally makes me want to physically beat people with a copy of the complete works of Plato, but I shall refrain on this occasion Because Context. Anyway the answer to "where will it stop ?" is always, as John Oliver put it, somewhere.
The philosophers are certainly right that we use criteria other than observational adequacy to formulate theories. That science operates by generating and subsequently testing hypotheses is only part of the story. Testing all possible hypotheses is simply infeasible; hence most of the scientific enterprise today—from academic degrees to peer review to guidelines for scientific conduct—is dedicated to identifying good hypotheses to begin with... It doesn’t relieve us from experimental test, but it’s an operational necessity to even get to experimental test.
In the foundations of physics, therefore, we have always chosen theories on grounds other than experimental test. We have to, because often our aim is not to explain existing data but to develop theories that we hope will later be tested—if we can convince someone to do it. But how are we supposed to decide what theory to work on before it’s been tested? And how are experimentalists to decide which theory is worth testing? Of course we use non-empirical assessment. It’s just that, in contrast to Richard, I don’t think the criteria we use are very philosophical. Rather, they’re mostly social and aesthetic. And I doubt they are self-correcting.
Not sure why it's "Richard" here and not "Dawid" ? Are they on a first-name basis ? Meh. I agree with the sentiment though : we generally choose theories because we like them, not for any higher philosophical reasoning. However (reversing the order of the original text for my own nefarious narrative purposes) :
He claims that certain criteria that are not based on observations are also philosophically sound, and he concludes that the scientific method must be amended so that hypotheses can be evaluated on purely theoretical grounds. Richard’s examples for this non-empirical evaluation—arguments commonly made by string theorists in favour of their theory—are (1) the absence of alternative explanations, (2) the use of mathematics that has worked before, and (3) the discovery of unexpected connections.
... those all seem like pretty good criteria to me. A theory that cannot yet be empirically tested, but which is intended to be eventually, is a sort of proto-science. Mathematics itself is like this, and no-one blames mathematicians for not doing empirical tests - and woe betide anyone who says they're irrational.
String theory is currently the most popular idea for a unified theory of the [fundamental physics] interactions. It posits that the universe and all its content is made of small vibrating strings that may be closed back on themselves or have loose ends, may stretch or curl up, may split or merge. And that explains everything: matter, space-time, and, yes, you too. At least that’s the idea. String theory has to date no experimental evidence speaking for it... Arguments from beauty have failed us in the past, and I worry I am witnessing another failure right now.
I don't like string theory; I think it's over-hyped. But at this point I think it falls more into the grey area between mathematics and science, rather than being something which is (as implied above) obviously wrong.
“So what?” you may say. “Hasn’t it always worked out in the end?” It has. But leaving aside that we could be further along had scientists not been distracted by beauty, physics has changed—and keeps on changing. In the past, we muddled through because data forced theoretical physicists to revise ill-conceived aesthetic ideals. But increasingly we first need theories to decide which experiments are most likely to reveal new phenomena, experiments that then take decades and billions of dollars to carry out. Data don’t come to us anymore—we have to know where to get them, and we can’t afford to search everywhere. Hence, the more difficult new experiments become, the more care theorists must take to not sleepwalk into a dead end while caught up in a beautiful dream. New demands require new methods. But which methods? I hope the philosophers have a plan.
I think there's some Historian's Fallacy at work here. I know a lot of people who are convinced that satellite galaxies orbit in planes around their hosts. They construct all kinds of fantastically elaborate theories to demonstrate this, and I'm absolutely persuaded that at least two of them are much more intelligent than me. Nonetheless, they are wrong - the planes just don't exist. What I'm not convinced of is that, had they not followed this wrong idea, they'd have made some other, more useful contribution to the field instead - it's at least equally possible they'd have become stuck on some other wrong idea. Equally, perhaps my own conviction that the planes don't exist is holding me back from making a useful contribution to the field. Or, to take it to extremes, I could suggest that all theoretical physicists should be forced to work on cancer research or something. The problem is there's little to suggest they'd be any good at it, and megatonnes (literally) of evidence that pure research eventually leads to practical consequences.
There's only so far you can go. Once you've made your arguments and tried to persuade other researchers of their follies, you can't really do more than that. You have to accept other people's views and let them get on with things : ultimately you just disagree.
https://www.scientificamerican.com/article/a-theory-with-no-strings-attached-can-beautiful-physics-be-wrong-excerpt/
Thursday, 7 June 2018
What the people really want
A very interesting list. The good news : the battle to convince the public of the importance of climatology appears to have been won. The bad news : convincing them of the importance of manned space exploration is having mixed results. Although manned missions beyond LEO appear only at the bottom of the list, research on the effects of spaceflight on human health appears in the middle. So perhaps the public think it's not yet safe enough to attempt.
The survey also suggests that not much of the American public pays attention to space, with just 7 percent of Americans saying they've heard or read "a lot" about NASA and private spaceflight companies such as SpaceX over the last year.
Which is a useful way to pop my local filter bubble, which is crammed with SpaceX-y goodness. However :
For the first time, Pew also asked several questions about private companies, "such as SpaceX, Blue Origin, and Virgin Galactic," that are developing space exploration capabilities. Strong majorities of respondents had a fair or great amount of confidence these companies would "build safe and reliable rockets and spacecraft" (80 percent), and "control costs for developing rockets and spacecraft" (65 percent).
https://arstechnica.com/science/2018/06/nasas-priorities-appear-to-be-out-of-whack-with-what-the-public-wants/
Wednesday, 6 June 2018
Being right is not enough
Some nice examples of correct predictions being a necessary but not sufficient criterion for proving a model correct (see also http://astrorhysy.blogspot.com/2016/04/perfectly-wrong-or-necessary-but-not.html).
For example, when Niels Bohr predicted in 1913 the correct frequencies of the specific colours of light absorbed and emitted by ionised helium, Einstein reportedly remarked: "The theory of Bohr must then be right."
Bohr's predictions could instantly persuade Einstein (and many others besides) because they were correct to several decimal places. But they came out of what we now know to be a deeply flawed model of the atom, in which electrons literally orbit the atomic nucleus in circles. Bohr was lucky: despite his model being wrong in fundamental ways, it also contained some kernels of truth, just enough for his predictions about ionised helium to work out.
But perhaps the most dramatic example of all concerns Arnold Sommerfeld's development of Bohr's model. Sommerfeld updated the model by making the electron orbits elliptical and adjusting them in accordance with Einstein's theory of relativity. This all seemed more realistic than Bohr's simple model... scientists working in the early 20th century thought of electrons as very tiny balls, and assumed their motion would be comparable with the motion of actual balls.
This turned out to be a mistake: modern quantum mechanics tells us that electrons are highly mysterious and their behaviour doesn't line up even remotely with everyday human concepts. So Sommerfeld's theory had a radical misconception at its very heart. Yet, in 1916, Sommerfeld used his model as the basis for an equation that correctly describes the detailed pattern of colours of light absorbed and emitted by hydrogen. This equation is exactly the same as the one derived by Paul Dirac in 1928 using the modern theory of relativistic quantum mechanics.
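It's easy to check the Bohr part of this yourself : the model's energy levels scale as Z²/n², so ionised helium (Z = 2) has levels exactly four times deeper than hydrogen's. A quick illustration (mine, not from the article) :

```python
def transition_energy_ev(z, n_upper, n_lower):
    """Photon energy for a transition in a Bohr-model atom or ion.

    Bohr levels are E_n = -13.6 eV * Z^2 / n^2, so transition
    energies scale as Z^2.
    """
    rydberg_ev = 13.6057
    return rydberg_ev * z**2 * (1.0 / n_lower**2 - 1.0 / n_upper**2)

# He+ (Z=2) lines ending on n=4 coincide with hydrogen's Balmer lines
# whenever the upper level is even - which is why the Pickering series
# was originally mistaken for hydrogen. Both of these print ~1.89 eV
# (the H-alpha line) :
print(transition_energy_ev(2, 6, 4))  # He+, n = 6 -> 4
print(transition_energy_ev(1, 3, 2))  # H,   n = 3 -> 2
```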
Despite the fact that later evidence proved these theories wrong, I don't think we should say the scientists involved made mistakes. They followed the evidence and that is precisely what a good scientist should do. They weren't to know that the evidence was leading them astray.
These few examples certainly shouldn't persuade us that science can't be trusted. It's rare for evidence to be very misleading and, usually, radically false theories don't produce successful, accurate predictions (and usually they produce radically false predictions). Science is a process of constant refinement, with a knack for ironing out unhelpful twists and turns in the long run. And we all know that even the most trustworthy can occasionally let us down.
https://phys.org/news/2018-06-evidence-scientists-decades.html
Friday, 1 June 2018
My new favourite journal
I'm pretty sure this academic spam email is part of a competition to produce the most predatory journal ever.
Transylvanian Review, (ISSN 1221-1249) is a peer reviewed multi-disciplinary specialist international journal aimed at promoting research worldwide in Agricultuaral Sciences, Biological Sciences, Chemical Sciences, Computer and Mathematical Sciences, Engineering, Environmental Sciences, Medicine and Physics (all scientific fields).
OK, first off.... Transylvania ??? Seriously ? That's not even a country ! It's like having the Swansea Journal Of Anthropology except that it's much worse, because they've picked the one region on Earth whose major defining characteristic is that it's full of vampires. The only way to save this is to double down and put pictures of Dracula on the front cover making terrible vampire-themed puns about the leading article. Well, you'd need to be undead to be able to review "all scientific fields".
Just in case anyone hadn't twigged that this is a scam :
After 12 days Rapid Shite Review Process by the editorial board members or outside experts, an accepted paper will be placed under In Press within 24 hours and will be published in the next issue.
... if you send them a suitable sum of drachmas, I daresay.
Transylvanian Review is Abstracted/Indexed in Thomson Reuters, Social Sciences Citation Index, Arts & Humanities Citation Index, SCOPUS, EBSCO, Ulrich's Periodicals Directory, Scirus, CiteSeerX, Index Copernicus, Directory of Open Access Journals, Google Scholar, CABI, Chemical Abstracts, Zoological Records, Global Impact Factor Australia, J-Gate, HINARI, WorldCat, British Library, European Library, Biblioteca Central, The Intute Consortium, Genamics JournalSeek, bibliotek.dk, OAJSE, Zurich Open Repository and Archive Journal Database.
Impact Factor for 2016 (JCR) = 0.045
All that indexing and for naught. Because it's shite. Or, possibly it just doesn't have enough vampires.