In 2011, the Journal of Personality and Social Psychology published a report of nine experiments purporting to demonstrate that an individual’s cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded (Bem, 2011). To encourage exact replications of the experiments, all materials needed to conduct them were made available on request. We can now report a meta-analysis of 90 experiments from 33 laboratories in 14 different countries which yielded an overall positive effect in excess of 6 sigma with an effect size (Hedges’ g) of 0.09, combined z = 6.33, p = 1.2 × 10^-10. A Bayesian analysis yielded a Bayes Factor of 7.4 × 10^9, greatly exceeding the criterion value of 100 for "decisive evidence" in favor of the experimental hypothesis (Jeffreys, 1961). Experimental tasks that required "fast-thinking" responses produced larger and more significant effect sizes than did "slow-thinking" tasks that allowed participants time to implement conscious cognitive strategies (see Kahneman, 2011). The number of potentially unretrieved experiments averaging a null effect that would be required to reduce the overall effect size to a trivial value was conservatively calculated to be 520. An analysis of p values across experiments implies that the results were not a product of "p-hacking," the selective suppression of statistical analyses that failed to yield significant results (Simonsohn, Nelson, & Simmons, 2013). We discuss the controversial status of precognition and other anomalous effects collectively known as psi.
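For anyone wanting to sanity-check the headline numbers in the abstract above, the reported one-tailed p-value follows directly from the combined z score under a standard normal null. A minimal Python sketch using only the standard library (the z value is taken from the quoted abstract; everything else is just the standard normal tail formula):

```python
import math

def one_tailed_p(z):
    """One-tailed p-value for a z score under a standard normal null:
    P(Z > z) = 0.5 * erfc(z / sqrt(2))."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Combined z = 6.33, as reported in the meta-analysis abstract quoted above.
p = one_tailed_p(6.33)
print(p)  # approximately 1.2e-10, matching the reported p = 1.2 x 10^-10
```

So the z and p values in the abstract are at least internally consistent with each other, whatever one makes of the underlying data.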
In 2011, Bem published the article "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect" in the Journal of Personality and Social Psychology, offering statistical evidence for psi. The article's findings challenged modern scientific conceptions about the unidirectional nature of time. Its presentation by a respected researcher and its publication in a top-tier journal engendered much controversy. In addition to criticism of the paper itself, its publication prompted a wider debate about the validity of the peer-review process for allowing such a paper to be published. Bem appeared on MSNBC and The Colbert Report to discuss the experiment.
Wagenmakers et al. criticized Bem's statistical methodology, arguing that he reported one-sided p-values when he should have used two-sided p-values, a choice that could account for the marginally significant results of his experiments. Bem and two statisticians subsequently published a rebuttal to this critique in the Journal of Personality and Social Psychology.
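To illustrate what this criticism amounts to: for a symmetric null distribution, the two-sided p-value is simply twice the one-sided value, so a result that looks marginally significant one-sided can fail the conventional .05 threshold two-sided. A minimal Python sketch, where z = 1.80 is an assumed illustrative value (not a figure from Bem's data):

```python
import math

def p_one_sided(z):
    # Upper-tail probability P(Z > z) for a standard normal null
    return 0.5 * math.erfc(z / math.sqrt(2))

def p_two_sided(z):
    # Both tails: twice the one-sided value for a symmetric null
    return 2 * p_one_sided(z)

z = 1.80  # illustrative value only, not taken from Bem (2011)
print(p_one_sided(z))  # ~0.036: below .05, "significant" one-sided
print(p_two_sided(z))  # ~0.072: above .05, not significant two-sided
```

The same test statistic passes or fails the .05 cutoff depending purely on which convention is used, which is why the choice mattered to Wagenmakers et al.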
After evaluating Bem's nine experiments, psychologist James Alcock said that he found metaphorical "dirty test tubes," or serious methodological flaws, such as changing the procedures partway through the experiments and combining results of tests with different chances of significance. The number of tests actually performed is not reported, nor is it explained how it was determined that participants had "settled down" after seeing erotic images. Alcock concluded that almost everything that could go wrong with Bem's experiments did go wrong. Bem's response to Alcock's critique appeared online at the Skeptical Inquirer website, and Alcock replied to those comments in a third article at the same website.
One of the nine experiments in Bem's study ('Retroactive Facilitation of Recall') was repeated by scientists Stuart Ritchie, Chris French, and Richard Wiseman. Their replication attempt, published in PLoS ONE, found no evidence of precognition. Several failed attempts by the authors to publish the replication highlighted difficulties in publishing replications and attracted media attention over concerns of publication bias. The Journal of Personality and Social Psychology, Science Brevia, and Psychological Science each rejected the paper on the grounds that it was a replication. A fourth journal, the British Journal of Psychology, refused the paper after reservations from one referee, later confirmed to be Daryl Bem, who "might possibly have a conflict of interest with respect to [the] ... submission." Wiseman set up a register to keep track of other replication attempts, to avoid problems with publication bias, and planned to conduct a meta-analysis on registered replication efforts.
An analysis by Gregory Francis in Psychonomic Bulletin & Review suggested that the number of rejections of the null hypothesis reported by Bem (eight out of nine experiments) is abnormally high, given the properties of the experiments and the reported effect sizes. He calculated that the probability of Bem obtaining such results (0.058) falls below the standard criterion (0.1) used in tests of publication bias. According to Francis, this suggests that Bem's experiments cannot be taken as a proper scientific study, as critical data are likely unavailable. Francis also noted that Bem's experiments ostensibly meet current standards of experimental psychology. Drawing on his own analysis and on studies suggesting a discrepancy between observed and expected null-hypothesis rejection rates across the field of experimental psychology, he suggested that the standards and practices of the field are not functioning properly.
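Francis's test can be illustrated with a back-of-the-envelope version: treat the nine experiments as independent coin flips whose success probability is each experiment's statistical power, and ask how likely eight or more rejections would be. A minimal Python sketch, where the uniform power of 0.58 is an assumed illustrative figure chosen to reproduce a probability near Francis's 0.058 (his actual calculation used per-experiment power values):

```python
from math import comb

def prob_at_least(k_min, n, power):
    """P(at least k_min of n independent experiments reject H0),
    assuming each rejects with probability `power` (binomial model)."""
    return sum(comb(n, k) * power**k * (1 - power)**(n - k)
               for k in range(k_min, n + 1))

# 8 or more rejections out of 9 experiments, each with assumed power 0.58
p = prob_at_least(8, 9, 0.58)
print(p)  # roughly 0.056, below the 0.1 publication-bias criterion
```

The point of the test is that if the true effects were as small as the reported effect sizes imply, a near-perfect rejection record across nine modestly powered experiments is itself an improbable outcome.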
The publication of Bem's article and the resulting controversy prompted a wide-ranging commentary by Etienne LeBel and Kurt Peters. Using Bem's article as a case study, they discussed deficiencies in modal research practice, the methodology most commonly used in experimental psychology. LeBel and Peters suggest that experimental psychology is systemically biased toward interpretations of data that favor the researcher's theory.
In 2012, the same journal that published Bem's original experiments, the Journal of Personality and Social Psychology (Vol. 103, No. 6), published "Correcting the Past: Failures to Replicate Psi" by Jeff Galek of Carnegie Mellon University, Robyn A. LeBoeuf of the University of Florida, Leif D. Nelson of the University of California at Berkeley, and Joseph P. Simmons of the University of Pennsylvania. The paper reported seven experiments testing for precognition that "found no evidence supporting its existence."
Yes, if the 2014 meta-analysis turns out anything like the 2011 paper, it too will be engulfed in controversy.
originally posted by: GetHyped
So in addition to the failed replication attempt of Bem's 2011 paper, this also cites a meta-analysis which also failed to find significant evidence for precognition effects.
Another possible hit this week for Professor Daryl Bem's controversial "Feeling the Future" experiments, which found positive evidence for precognition, with the publication of a large-scale replication study which found no psi effect. The full paper, "Correcting the Past: Failures to Replicate Psi", is freely available for download, and I recommend downloading it and having a good read. No doubt many will suffer (as I did) through the more technical descriptions of statistical analysis, but amongst that there is fascinating, respectful discussion of the Bem experiments and this latest replication attempt. In addition to the experiment results, the paper also features a meta-analysis of all replications attempted so far, which found again no significant evidence for precognition effects.
There are, in fact - and this seems not to be widely known - quite a few positive replications of Bem's research. I was hoping you could bring these replications to light, so that public audiences interested in this matter will get all the facts regarding the issue of replicating Bem (2011), and recognize the bias in the view propagated by many pseudoskeptical journalists. If this information was more widely available, the "climate" surrounding the Bem controversy would, perhaps, be a bit different.
Here is a list of several positive Bem replications - these are not all extant conceptually similar "implicit precognition" experiments (which Dean Radin says are under meta-analytic review, presently), but only those studies that specifically replicate the experimental paradigms in Bem (2011):
Batthyany, A. (2010). Retrocausal Habituation and Induction of Boredom: A Successful Replication of Bem (2010; Studies 5 and 7). Social Science Research Network, Working Paper Series.
Franklin, M. S., & Schooler, J. W. (2011). Using retrocausal practice effects to predict online roulette spins. A talk presented at the Society for Experimental Social Psychology, Washington D.C., U.S.A., October, 2011.
Franklin, M. S., & Schooler, J. W. (2011). Using retrocausal practice effects to predict random binary events in an applied setting. A talk presented at Towards a Science of Consciousness, Stockholm, Sweden, May, 2011. [more recently: Franklin, M., and Schooler, J. (2012). Using retrocausal practice effects to predict random binary events in an applied setting. Toward a Science of Consciousness, Tucson X].
Tressoldi, P. E., Masserdotti, F., & Marana, C. (2012). Feeling the future: An exact replication of the Retroactive Facilitation of Recall II and Retroactive Priming experiments with Italian participants. Universita di Padova, Italy.
Subbotsky, E. (2012). Sensing the future: The Non-standard observer effect on an ESP task. Lancaster University, UK
Bijl, A., & Bierman, D. (2013). Retroactive training of rational vs. intuitive thinkers. Proceedings of the 56th Annual Convention of the Parapsychological Association.
Parker, A., & Sjödén, B. (2010). Do some of us habituate to future emotional events? Journal of Parapsychology, 74, 99–115.
Savva, L., Child, R. & Smith, M. D. (2004). The Precognitive Habituation Effect: An Adaptation Using Spider Stimuli. The Parapsychological Association Convention 2004, pp. 223 – 229.
Our goal was to identify any direct replication attempts of either Experiment 8 or 9 from Bem (2011). To that end, we identified 12 replications and included 10 of them in our meta-analysis (Table 2).