All sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.
For many scientists, the effect is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved? Which results should we believe? Francis Bacon, the early-modern philosopher and pioneer of the scientific method, once declared that experiments were essential, because they allowed us to “put nature to the question.” But it appears that nature often gives us different answers.
Originally posted by TonyBravada
Ok, no, it does not hint at the Heisenberg uncertainty principle. That principle states that one cannot know both the position and the momentum of a particle exactly at the same time; the limit is a fundamental property of the particle itself, not just an artifact of the methods used to measure its properties. Most of the world of quantum mechanics is irrelevant to everyday life and macroscopic systems (although a lot of technology depends on its concepts).
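For reference, a standard formal statement of the principle (added here for clarity; it is not from the original post):

\[ \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} \]

where \(\Delta x\) and \(\Delta p\) are the standard deviations of position and momentum, and \(\hbar\) is the reduced Planck constant.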
Originally posted by iforget
This is old news, I am aware, but I've never had a chance to read differing opinions on it, and I'd love to hear some. Perhaps it is my own ignorance, but this seems to hint at the Heisenberg uncertainty principle in some ways. Reality is curious.
So acupuncture trials aren't really 100% positive in the East and 56% positive in the West because the treatment works differently there; more likely, both regions have bias in their published results. It's quite easy to use statistics inappropriately to bias the results.
While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
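A toy simulation (my own sketch, not from the article; the trial sizes and publication probabilities are made up) shows how a split like 100% versus 56% could arise from publication practices alone: the same worthless treatment, tested everywhere, looks perfect in a literature that never publishes negative results, and merely "mostly effective" in one that occasionally does.

import random

def published_positive_rate(n_trials, publish_negative_prob, n_patients=100):
    # Simulate trials of a treatment with NO real effect (50/50 outcomes).
    # Every "significant" result is published; each null result is
    # published only with probability publish_negative_prob.
    published = positive = 0
    for _ in range(n_trials):
        successes = sum(random.random() < 0.5 for _ in range(n_patients))
        # crude z-test of the observed success count against the 50% null
        z = (successes - 0.5 * n_patients) / (0.25 * n_patients) ** 0.5
        significant = z > 1.96  # one-sided test at the usual cutoff
        if significant or random.random() < publish_negative_prob:
            published += 1
            positive += significant
    return positive / published

random.seed(1)
print(published_positive_rate(20_000, publish_negative_prob=0.0))   # 1.0: only positives appear
print(published_positive_rate(20_000, publish_negative_prob=0.02))  # roughly 0.56

With these made-up settings, only about 2.5% of null trials come out "significant" by chance, so suppressing all negative results yields a 100%-positive literature, while publishing even 2% of the negatives pulls the published positive rate down toward the Western figure.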
ESP tests have big problems with bias.
Originally posted by homeskillet
I would guess that although Schooler wasn't trying to study ESP, but rather the decline effect he observed, he may have actually tapped into the bizarre effects of ESP, and it could possibly be affecting other researchers in other fields.
(p. 50)
My approach to the problem of experimenter effects has been to minimize the experimenter's role as much as possible, reducing it to that of greeter and debriefer, and leaving the experimental instructions and other interactions with the participant to the computer program. Moreover, I used several undergraduate experimenters in each experiment and deliberately gave them only informal training. This was to ensure that the experimental protocols are robust enough to overcome differences among experimenters so that the protocols have a better chance of surviving replications in other laboratories. Whether or not this strategy will be successful remains to be seen.

Finally, the success of replications in psychological research often depends on subtle and unknown factors. For example, Bornstein's (1989) meta-analysis of the well-established mere exposure effect reveals that the effect fails to replicate on simple stimuli if other, more complex stimuli are presented in the same session. It also fails to replicate if too many exposures are used, if the exposure duration is too long, if the interval between exposure and the assessment of liking is too short, or if participants are prone to boredom. As previously noted, the mere exposure effect had not even been tested with strongly valenced stimuli until Dijksterhuis and Smith (2002) conducted their habituation experiment, showing that strong positive stimuli actually reverse the mere exposure effect.
Some of the fundamental flaws are discussed, but unless you are well versed in statistics you may not understand all of them; part of the problem is the use of inappropriate statistical techniques.
Some scientists say the report deserves to be published, in the name of open inquiry; others insist that its acceptance only accentuates fundamental flaws in the evaluation and peer review of research in the social sciences.
That article also mentions the replication results so far:
Many statisticians say that conventional social-science techniques for analyzing data make an assumption that is disingenuous and ultimately self-deceiving: that researchers know nothing about the probability of the so-called null hypothesis.
In this case, the null hypothesis would be that ESP does not exist. Refusing to give that hypothesis weight makes no sense, these experts say; if ESP exists, why aren’t people getting rich by reliably predicting the movement of the stock market or the outcome of football games?
Instead, these statisticians prefer a technique called Bayesian analysis.
This is probably no surprise to the statisticians who said that Bayesian analysis should have been used instead.
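To make the statisticians' point concrete, here is a minimal sketch (my own illustration, not code from the article or from Bem's paper) of a Bayesian analysis of a Bem-style guessing experiment. It compares how well "no ESP" (hit rate exactly 50%) and "some ESP" (hit rate unknown, uniform prior) predict the data; the hit counts below are invented for illustration.

from math import lgamma, log, exp

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bayes_factor_null_vs_esp(hits, trials):
    # Bayes factor BF01 for H0: theta = 0.5 (pure chance) versus
    # H1: theta ~ Uniform(0, 1) (ESP of unknown strength).
    # The binomial coefficient cancels in the ratio.
    # BF01 > 1 favors the null; BF01 < 1 favors ESP.
    log_m0 = trials * log(0.5)                      # marginal likelihood under H0
    log_m1 = log_beta(hits + 1, trials - hits + 1)  # marginal likelihood under H1
    return exp(log_m0 - log_m1)

# Illustrative numbers only: ~53% hits where chance predicts 50%,
# roughly the size of effect reported in the Bem experiments.
print(bayes_factor_null_vs_esp(hits=531, trials=1000))  # about 3.6: mildly favors "no ESP"

Note the contrast with a conventional significance test: 531 hits in 1,000 trials comes out "statistically significant" against the null, yet the Bayes factor says the same data fit "pure chance" slightly better than "ESP of unknown strength", which is essentially the objection the quoted statisticians raise.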
So far, at least three efforts to replicate the experiments have failed.
Many young scientists today would say that the position you suggest is grounded too much in a mechanical vision of the world and does not hold up against what quantum physics (QP) indicates.
That Pesky Second Link
Q: Does this mean I don’t have to believe in climate change?
A: I’m afraid not. One of the sad ironies of scientific denialism is that we tend to be skeptical of precisely the wrong kind of scientific claims. In poll after poll, Americans have dismissed two of the most robust and widely tested theories of modern science: evolution by natural selection and climate change. These are theories that have been verified in thousands of different ways by thousands of different scientists working in many different fields... Instead of wasting public debate on creationism or the rhetoric of Senator Inhofe, I wish we’d spend more time considering the value of spinal fusion surgery, or second generation antipsychotics, or the verity of the latest gene association study.
People continue to see the inherent flaws in a revolving-door system that uses its own 'peers' for review, many of whom are themselves beholden to the very system that puts profit and career over scientific accuracy.
How can any system whose primary goal is to generate profit as a business first and foremost...
...and which installs its own gatekeepers, ever become unbiased and objective?