Peer Review Tyranny


posted on Aug, 11 2014 @ 03:22 PM
***ATTENTION***

That is enough of that. The back and forth about each other stops immediately, or Posting Bans will be handed out. This thread is not about each other, it is about Peer Reviewing, so let's keep it to that.

Do not reply to this post.




posted on Aug, 11 2014 @ 03:58 PM

originally posted by: HarbingerOfShadows


Your link is to "Scientists 'bad at judging peers' published work,' says new study," on m.phys.org. Evidently m.phys.org is Phys.Org Mobile. Does anyone know what that means?

It apparently appeared in an open access journal, PLOS Biology, PLOS meaning Public Library of Science.

Also, what is an open access journal?



posted on Aug, 11 2014 @ 04:14 PM

originally posted by: HarbingerOfShadows


At the bottom of the page for your link, there is another link for more information.

It links to plosbiology.org, "The Assessment of Science: The Relative Merits of Post-Publication Review, the Impact Factor, and the Number of Citations" by Adam Eyre-Walker and Nina Stoletzki.

Here is the Abstract:


The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.

www.plosbiology.org...
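In plainer terms, the "correlation between assessor scores" the abstract mentions is just a statistical correlation between the merit scores two experts give the same set of papers. Here is a minimal sketch in Python, using invented scores rather than the paper's actual data, and a rank correlation that is not necessarily the exact statistic the authors used:

from scipy.stats import spearmanr

# Invented merit scores (1-10) that two assessors gave the same ten papers.
# These numbers are hypothetical, NOT the Eyre-Walker/Stoletzki data.
assessor_a = [7, 5, 9, 4, 6, 8, 3, 7, 5, 6]
assessor_b = [6, 6, 8, 5, 4, 9, 4, 5, 7, 5]

rho, p_value = spearmanr(assessor_a, assessor_b)
print(f"Spearman correlation: {rho:.2f} (p = {p_value:.3f})")
# A value near 1 means the two assessors rank papers almost identically;
# a value near 0 means their judgements are essentially unrelated.

The paper's finding is that once you control for which journal a paper appeared in, this sort of correlation turns out to be weak.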



posted on Aug, 11 2014 @ 04:59 PM
a reply to: Mary Rose

Yeah, what that means is that if you publish in journals with higher impact factors, your papers get cited more. What are journals with higher impact factors? Those whose papers, on average, get cited more. [And it's true that if you're writing a paper, you had better cite any relevant article in Nature by the big shot in the field, because that big shot might be reviewing your paper/grant/tenure someday.]

And there is an implied assumption that number of citations is a proxy for merit, which is fairly weak.

It's a proxy for being interesting x controversial x well_known_author x well_known_journal x being_in_a_currently_hot_field.

There could be a profound and conclusive discovery or validation, but because it doesn't impact other people's ongoing research stream, it doesn't get cited that much. (And if something is truly important, it becomes so well known that citations are unnecessary. Nobody cites the 25 founding articles of quantum mechanics much any more; at most you might see a reference to a standard textbook like Landau & Lifshitz or Sakurai.)

The abstract also draws the conclusion that if the correlation between two reviewers' scores is low, then humans don't have a way of assessing merit.

And then it advocates "impact factor," which is a proxy for "Nature, Science, and Cell," where, yes, two or three humans (peer reviewers plus an editor) do exactly the same thing (reading the paper over) that was just declared to be inadequate. The main difference at the top journals is that most papers still get rejected even if they don't have any significant scientific flaws; they just aren't cool enough.



posted on Aug, 11 2014 @ 05:12 PM
a reply to: ErosA433

In practice the tough exam is the candidacy examination; the defense (at the end) should usually have most of the bugs worked out.

Peer review is fairly successful at improving the readability of papers for outside readers, and at rejecting those that are sufficiently confusing.

During the peer review stage, it's often very difficult to distinguish genius or very important work from merely "OK" work; oftentimes the importance may not be recognized at first, or the influence requires additional work and understanding to bear fruit.

Peer review, however, does distinguish nonsensical garbage from mediocre and up. Plenty of scientific papers turn out to be wrong, and that's OK. The "not even wrong" garbage has to go.



posted on Aug, 11 2014 @ 06:33 PM
a reply to: mbkennel

Thanks..... pretty much sums up that a refusal to have a claimed new discovery peer reviewed..... probably denotes fraud.

as well as any paper based on that claim.....



posted on Aug, 11 2014 @ 06:40 PM

originally posted by: HarbingerOfShadows



originally posted by: Mary Rose
Your link is to "Scientists 'bad at judging peers' published work,' says new study," on m.phys.org. Evidently m.phys.org is Phys.Org Mobile.


From the link:


Prof. Eyre-Walker and Dr Nina Stoletzki studied three methods of assessing published scientific papers, using two sets of peer-reviewed articles. The three assessment methods the researchers looked at were:

• Peer review: subjective post-publication peer review where other scientists give their opinion of a published work;

• Number of citations: the number of times a paper is referenced as a recognised source of information in another publication;

• Impact factor: a measure of a journal's importance, determined by the average number of times papers in a journal are cited by other scientific papers.


The findings, say the authors, show that scientists are unreliable judges of the importance of a scientific publication: they rarely agree on the importance of a particular paper and are strongly influenced by where the paper is published, over-rating science published in high-profile scientific journals. Furthermore, the authors show that the number of times a paper is subsequently referred to by other scientists bears little relation to the underlying merit of the science.

As Eyre-Walker puts it: "The three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased and expensive method by which to assess merit. While the impact factor may be the most satisfactory of the methods considered, since it is a form of prepublication review, it is likely to be a poor measure of merit, since it depends on subjective assessment."

m.phys.org...
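For concreteness, the impact factor described above is just an arithmetic average: the commonly used two-year version divides the citations received in a given year by the articles a journal published in the previous two years by the number of citable articles it published in those two years. A minimal sketch with invented figures for an imaginary journal:

# Hypothetical figures for an imaginary journal -- illustration only.
citations_in_2014_to_2012_2013_papers = 1850
citable_items_published_2012_2013 = 740

impact_factor = citations_in_2014_to_2012_2013_papers / citable_items_published_2012_2013
print(f"2014 impact factor: {impact_factor:.2f}")  # prints 2.50

Note that the average is taken over the whole journal, not any individual paper, which is part of why the authors argue it is a crude proxy for any single article's merit.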



posted on Aug, 12 2014 @ 06:34 AM

originally posted by: Mary Rose

The third quote emphasized what is at stake when a reviewer peer reviews. Maybe that could be called “peer review conflict of interest.”

But it's all peer review tyranny if the end result is a procedure that, because of its absolute power, fails to safeguard scientific progress.


Again, the third quote is:



originally posted by: MagoSA

. . . Peer review is a non-objective review of a scholar's paper, data, methodology, and conclusions that is submitted to multiple other scholars in the same discipline as the submitters in order to verify and authenticate the material contained.

As anthropologists can attest, the peer review does not ensure honesty and objective consideration of data and results. Often, peer review will reject cutting-edge material for a number of reasons, the most common of them being that they do not match what the reviewer has invested in his/her own research and conclusions. . . .

www.abovetopsecret.com...

My question is: How can it be any other way?

Why would a reviewer not want to protect his or her best interests?

This seems to be common sense to me.



posted on Aug, 12 2014 @ 01:27 PM
a reply to: Mary Rose

I once heard of this scientific team that pushed a claim.....they hired their own peer review team to keep their data inside.....


...wonder how common this practice is.



posted on Aug, 12 2014 @ 01:28 PM
a reply to: biffcartright

Source?



posted on Aug, 12 2014 @ 01:33 PM
a reply to: biffcartright

That's interesting.

How would one go about doing that?

Can you remember any of the details?



posted on Aug, 12 2014 @ 01:37 PM
if one does not like being rejected,
one can submit at viXra,
where one does not need a sponsor,
nor is anything peer reviewed.
but then nothing is guaranteed to be accurate or correct, either.



posted on Aug, 12 2014 @ 01:41 PM

originally posted by: biffcartright
they hired their own peer review team to keep their data inside


That makes me think of pharmaceutical companies being allowed to pay for the testing of drugs they want approved by the Food and Drug Administration (FDA).

I am of the opinion that that is a corrupt system on the face of it.

Public safety should be paid for by the taxpayer, and whoever tests the drug should answer only to the government, because otherwise they're under too much pressure to please their client.

I think that's analogous.



posted on Aug, 12 2014 @ 02:06 PM
a reply to: Mary Rose

Again, this has nothing to do with peer-review. It's about judging the importance of a paper using different metrics.



posted on Aug, 12 2014 @ 02:21 PM

originally posted by: krash661
but also nor is anything accurate or correct.


When you say that, it could be as compared to official mainstream science dogma, including theory which masquerades as fact.



posted on Aug, 12 2014 @ 02:31 PM

originally posted by: Mary Rose

originally posted by: krash661
but also nor is anything accurate or correct.


When you say that, it could be as compared to official mainstream science dogma, including theory which masquerades as fact.

no, this is a typical crank/crackpot comment.
and it appears that what this individual is referring to,
"including theory which masquerades as fact,"
is a typical misconception,
one usually spewed by cranks or individuals who simply do not understand or know the actual process,
and who do not want to take the time to understand it, yet rant and rave about it.



posted on Aug, 12 2014 @ 02:45 PM
a reply to: Mary Rose

Right, because there's no competition between scientists or their egos. It's a bunch of guys and gals pretending to go to a scientific "conference" where instead of debating the merits of published work they actually just drink Martinis and gin and tonics while holding hands in a circle singing kumbaya.

The fact of the matter is that the merits of papers are quite heavily debated and I've been at more than one lecture or conference where I thought a physical altercation was about to break out between parties.

Theories do not masquerade as fact, and at this stage of the game I would have thought even you were able to differentiate between a layman's theory, which is the equivalent of Scooby-Doo, Shaggy, Fred and the gang having a hunch, and a scientific theory, which has an entirely different set of stringent standards for something to make it beyond the hypothesis stage into accepted theory.


A scientific theory is a well-substantiated explanation of some aspect of the natural world that can incorporate facts, laws, inferences, and tested hypotheses. A scientific theory is differentiated from a hypothesis in that a theory must explain actual observations.



