Scientific journals accept papers at random


posted on Mar, 1 2018 @ 03:02 PM
Science also has a yardstick known as the "p-value": roughly, how likely it is that a difference seen in an experiment could arise by chance alone, and it has to fall below a set cutoff before the result is considered worthy of notice.


You can think of the p-value as a “warmer-colder” game for scientific knowledge: a low p-value shouts “warmer,” while a high p-value reads “colder, not much to see here.” It’s a helpful tool, but it’s not perfect. And the threshold that most scientists use is less perfect than some would like.

The p-value was originally devised by French scholar Pierre-Simon Laplace, a contemporary of Napoleon Bonaparte. Sir Ronald A. Fisher later popularized it after a famous experiment involving a lady who could taste differences in virtually identical cups of tea. Fisher proposed a p-value threshold of 0.05, and his ideas laid the groundwork for modern experimentation as it exists today.

PBS.org, NOVA Now - Rethinking Science’s Magic Number.
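To put the "warmer-colder" idea into actual numbers, here is a minimal sketch in Python of a Fisher-style tasting test (the counts are my own illustration, not taken from the article): if someone calls 8 cups out of 8 correctly when pure guessing would be 50/50, how surprising is that under the null hypothesis?

# Minimal p-value sketch (illustrative numbers only, not from the PBS piece).
from scipy.stats import binom

n_cups = 8        # cups presented to the taster
k_correct = 8     # cups called correctly
p_chance = 0.5    # probability of a correct call by guessing alone

# One-sided p-value: chance of doing at least this well by luck.
# binom.sf(k - 1, n, p) gives P(X >= k) for a binomial variable X.
p_value = binom.sf(k_correct - 1, n_cups, p_chance)
print(f"p-value = {p_value:.4f}")   # 0.5**8, about 0.0039: "warmer"

The same taster getting only 5 of 8 right would come out around 0.36, i.e. "colder, not much to see here."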

Unscrupulous science can throw in extreme data to make a p-value test appear valid. Fisher just picked a number, 0.05, with no deep statistical reasoning! Now many argue the cutoff should be 0.005, moving from hundredths to thousandths, to count as significant. Physics uses an even lower threshold, 0.000 000 3 (the five-sigma standard), which is roughly three in ten million.
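The sigma talk in physics and the p-value talk are two views of the same cutoff: a sigma level is just a tail probability of a standard normal distribution. Here is a quick sketch of the conversion in Python (a general convention, not tied to any particular experiment):

# Convert sigma thresholds to one-sided normal tail probabilities.
from scipy.stats import norm

for sigma in (2, 3, 5):
    p = norm.sf(sigma)          # P(Z >= sigma), the upper tail
    print(f"{sigma}-sigma  ->  p ~ {p:.2e}")

# 2-sigma  ->  p ~ 2.28e-02  (in the ballpark of the everyday 0.05)
# 3-sigma  ->  p ~ 1.35e-03  (close to the proposed 0.005 cutoff)
# 5-sigma  ->  p ~ 2.87e-07  (the "discovery" standard, about 3 in 10 million)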

I was wondering why some things take so long to hit the shelves and kept seeing "3-sigma" in the data (this was the neutron star merger). They have to reach a certain level of accuracy in the data (usually from multiple sources), which includes things like calibration reports to guarantee the equipment is trustworthy. All of that takes a lot of time, especially because multiple teams are involved (there were more than 400 scientists on the neutron star merger, for example). This is mainly why there are few, if any, single-scientist discoveries any more.

For example, remember that guy who caught a picture of a supernova while testing out his camera? He took the photo in September. A bunch of telescopes then turned to that area of the sky and started recording data. They did the 3-sigma write-up, and it was not until this past October (a year later) that they posted the write-up on the arXiv. It took another three weeks to be vetted, and only then was it published.

Real science takes time. Much to my chagrin *ahem, graphene battery, ahem*



posted on Mar, 1 2018 @ 05:12 PM

originally posted by: vernichter

originally posted by: GetHyped
a reply to: vernichter

>They are journals, they are scientific, they accept at random.

You need to look into this thing called "sample size".

So look into it.

originally posted by: GetHyped
a reply to: vernichter
>You are either not very smart or not very honest.

What exactly is dishonest here? Pick any mid to top tier journal and try submitting any old nonsense and report back.

I see: you are an honest idiot.


Ok champ, go ahead and submit a paper to any credible scientific journal and let us know how you get on.

I'll just wait.



posted on Mar, 2 2018 @ 10:26 PM

originally posted by: TEOTWAWKIAIFF
Fisher just picked a number, 0.05, with no deep statistical reasoning! Now many argue the cutoff should be 0.005, moving from hundredths to thousandths, to count as significant. Physics uses an even lower threshold, 0.000 000 3 (the five-sigma standard), which is roughly three in ten million.

In the present experiment you can't rule out the null hypothesis of randomly-accepting editors even at the 0.3 level.
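For what such a test would even look like, here is a minimal sketch in Python with made-up counts (placeholders, not the figures from the 1982 study or from this thread): ask whether the number of accepted resubmissions is surprising if editors were simply accepting at some assumed base rate.

# Hypothetical illustration of testing the null "editors accept at random".
# All counts and rates below are placeholders, NOT the 1982 study's data.
from scipy.stats import binom

n_resubmitted = 12   # resubmitted papers (hypothetical)
k_accepted = 3       # of those, accepted (hypothetical)
base_rate = 0.20     # assumed background acceptance rate (hypothetical)

# Rough two-sided p-value from the smaller binomial tail, doubled.
p_upper = binom.sf(k_accepted - 1, n_resubmitted, base_rate)  # P(X >= k)
p_lower = binom.cdf(k_accepted, n_resubmitted, base_rate)     # P(X <= k)
p_value = min(2 * min(p_upper, p_lower), 1.0)
print(f"p-value ~ {p_value:.2f}")
# With these placeholder numbers the p-value comes out large, so random
# acceptance cannot be ruled out, which is the point being made here.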



posted on Mar, 3 2018 @ 04:02 AM

originally posted by: vernichter

originally posted by: TEOTWAWKIAIFF
Fisher just picked a number, 0.05, with no deep statistical reasoning! Now many argue the cutoff should be 0.005, moving from hundredths to thousandths, to count as significant. Physics uses an even lower threshold, 0.000 000 3 (the five-sigma standard), which is roughly three in ten million.

In the present experiment you can't rule out the null hypothesis of randomly-accepting editors even at the 0.3 level.


Then show us all wrong by submitting a paper to any credible journal and getting it randomly accepted.

Pretty easy hypothesis to prove, right? So go ahead and put your money where your mouth is.



posted on Mar, 9 2018 @ 12:47 PM

originally posted by: GetHyped
Ok champ

Maybe such an educated and wise scientist as yourself can answer my simple question: how many quarks can sit on the tip of a needle?



posted on Mar, 10 2018 @ 11:28 AM
a reply to: GetHyped

What they showed was that papers with methodological defects (quite plausible in psychology research, as it's genuinely difficult) have some probability of being accepted anyway (the earlier paper), and that acceptance is plausibly stochastic.

In some way, all papers have limitations and flaws, and it comes down to the judgement and personality of the reviewers and editors whether a paper goes through or not.

And, read the original, this experiment was done in 1982. That's a generation and a half ago. The practice and knowledge of statistical methodology in psychology has advanced substantially since then, and the editors and reviewers know it.

Back in 1982 there wasn't full text search.






posted on Mar, 10 2018 @ 01:07 PM
I have come up with a scientific theory.
It may be big one day.
Who do I tell so no one steals it?
I don't want fame or money,
just for it to be known that I came up with it.
If I put it on ATS they would claim it!



posted on Mar, 11 2018 @ 12:18 PM

originally posted by: mbkennel
a reply to: GetHyped

What they showed was that papers with methodological defects (quite plausible in psychology research, as it's genuinely difficult) have some probability of being accepted anyway (the earlier paper), and that acceptance is plausibly stochastic.


And that 8 out of 9 published papers have such defects.


originally posted by: mbkennel
a reply to: GetHyped
Back in 1982 there wasn't full text search.

That's why the experiment was possible back then.



