In a sting operation conducted by the journal Science, contributing correspondent John Bohannon uncovered a "Wild West" landscape among fee-seeking publishers -- a portion of which use false addresses, false names, overseas bank accounts and superficial "peer reviews" on a routine basis.
"From humble and idealistic beginnings a decade ago, open-access scientific journals have mushroomed into a global industry, driven by author publication fees rather than traditional subscriptions," wrote Bohannon, a molecular biologist and science reporter.
"Most of the players are murky," he wrote. "The identity and location of the journals' editors, as well as the finacial workings of their publishers, are often purposefully obscured."
Hoping to test the academic rigor of these journals, Bohannon concocted a false and fatally flawed study on a wonder cure for cancer. Variations of the paper, which were sent to 304 journals, contained experimental blunders that should have been detected during a proper review.
The statistical error that just keeps on coming
The same statistical errors – namely, ignoring the "difference in differences" – are appearing throughout the most prestigious journals in neuroscience
We all like to laugh at quacks when they misuse basic statistics. But what if academics, en masse, deploy errors that are equally foolish? This week Sander Nieuwenhuis and colleagues publish a mighty torpedo in the journal Nature Neuroscience.
They've identified one direct, stark statistical error so widespread it appears in about half of all the published papers surveyed from the academic neuroscience research literature.
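The error Nieuwenhuis and colleagues describe is comparing two effects by checking whether each one clears significance on its own, instead of testing the difference between them directly. A minimal numeric sketch, using made-up effect sizes and standard errors purely for illustration, shows why the shortcut fails:

```python
import math

# Hypothetical effect estimates (mean change, standard error) in two groups.
effect_a, se_a = 25.0, 10.0   # group A: z = 2.5, nominally "significant"
effect_b, se_b = 10.0, 10.0   # group B: z = 1.0, "not significant"

z_a = effect_a / se_a
z_b = effect_b / se_b

# The flawed inference: A is significant and B is not, therefore A and B
# "differ". The correct test compares the difference itself -- and its
# standard error combines the uncertainty from both groups.
diff = effect_a - effect_b
se_diff = math.sqrt(se_a**2 + se_b**2)
z_diff = diff / se_diff

print(z_a > 1.96)     # True  -- group A effect clears the threshold
print(z_b > 1.96)     # False -- group B effect does not
print(z_diff > 1.96)  # False -- yet the difference itself is NOT significant
```

So one effect being significant while the other is not tells you nothing, by itself, about whether the two effects actually differ. That is the mistake the Nature Neuroscience survey found in roughly half the papers examined.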
And, of course, a peer reviewed article doesn't have to be an intentional hoax to be wrong either.
That's where the independent replication and verification part of science comes into it.
Please specify where I said it doesn't matter. You seem to have set up a straw man that you can knock down at your pleasure.
This discredits the entire process and while you may think it doesn't matter, there are many that will and do.
No, it doesn't.
It shows that the peer review process, which is meant to be a form of professional self-regulation lending credibility to scientific endeavours, has become rotten to the core.
I'm glad to see somebody gets it.
The more important thing is to actually look at the journals that apparently reviewed and accepted it. I'll make a small leap and say that the whole point of the exercise was to expose these journals as frauds.
That would be a better test of peer review, to send it to journals that actually do peer review, but the problem I see with sending it to hundreds of journals is that it would probably end up going to some of the same reviewers from different sources, wouldn't it?
I'd like to see Science repeat this experiment, with a slightly more believable but equally false paper (say, based on quantum physics), and send it to hundreds of paid journals. Then publish the names of which ones accepted it.
Yesterday, Science Magazine published a news story (not a peer-reviewed paper) by Gonzo-Scientist John Bohannon on a sting operation in which a journalist submitted a bogus manuscript to 304 open access journals (observe that no toll access control group was used). Science Magazine reports that 157 journals accepted and 98 rejected the manuscript. No word on any control group or other data that would indicate what the average acceptance rate for bogus manuscripts might be in general.
As Michael Eisen points out, this story is merely the pot calling the kettle black, when Science Magazine is replete with bogus articles (such as that on #arseniclife, for instance) and the magazine has one of the highest retraction rates of the entire industry. Which brings me to the main point of this post: it should come as no surprise that Science Magazine publishes a news story on an ill-conducted sting operation, an anecdote without proper controls – that's what glamor magazines like Science, Cell or Nature do. The data we have on this point are quite unequivocal: high-ranking journals like these retract many more papers than any other journal, and a large fraction of these are retracted because of fraud. There is not even a single quality-related metric in the literature that would confidently express any advantage, quality-wise, of high-ranking journals over others. However, there are a number of metrics which suggest that, in fact, the quality and reliability of the science published in these Glam Magz is actually below average.
To make things worse, when we submitted these data to Science Magazine, they rejected them with the remark that "we feel that the scope and focus of your paper make it more appropriate for a more specialized journal". Obviously, Science Magazine values anecdotes more than actual data. No surprise their retraction rate is going through the roof: rejecting data that make them look bad while publishing anecdotes that make them look good.