
Global Consciousness Project Challenge


posted on Oct, 15 2013 @ 01:22 AM
I have serious doubts about the so-called Global Consciousness Project statistics and data analysis. Statistical methods are meant to be applied to samples, not to what amounts to a census of an accumulated massive database. One can of course determine such things as population means and variances, and one can talk about what a sample from such a mass of data may look like. But one cannot use the ENTIRE data set, plus all running summary data, to determine the significance of anything.
Looking at accumulated z-scores over a decade or more of data and comparing this accumulated difference to some hypothetical theoretical value is not a statistical test. It is statistical nonsense. These data were collected in a time period, and that time period will be shown to be "different" if ENOUGH data is examined. Such is the nature of randomness.
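To illustrate the point, here is a quick sketch in Python (numpy assumed; all numbers are simulated, not GCP data) showing how far the cumulative sum of perfectly fair z-scores wanders on its own:

```python
import numpy as np

# Half a million perfectly fair z-scores, accumulated.
rng = np.random.default_rng(0)
z = rng.standard_normal(500_000)
cum = np.cumsum(z)

# The cumulative sum of null z-scores is a random walk: its typical
# excursion grows like sqrt(n), so a large accumulated deviation is
# exactly what unbiased randomness produces over a long record.
n = len(z)
print(abs(cum[-1]) / n ** 0.5)  # typically of order 1, not of order 0
```

The "accumulated deviation" never settles toward zero; it drifts on the sqrt(n) scale forever.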

We can, however, propose a valid hypothesis test: do the next n events of a particular nature produce what is believed to be a signal of “global consciousness” at a statistically significantly greater rate than a concurrent control does?

It would, of course, be incredibly important to know if there were an external influence on random number generation. There is at least one known possibility: any random number generator that depends on radioactive decay may be affected by the Earth's position in its orbit about the sun, because the decay rates of certain isotopes appear to vary with it. Other, unknown possibilities may exist. A reasonably constructed control case running concurrently in time with the test case can correct for this potential problem.

Proposed test:
To give the GCP people the benefit of the doubt on an obviously complex problem, they should study their existing data to date as thoroughly as desired. They should then propose a type of event that they can monitor, along with acceptance criteria establishing that an event has occurred and is valid to include. They should set the number of hours for which they will monitor the random number streams, the statistic they propose to use for a signal, and the signal criteria. The signal criteria should include a specific threshold to cross plus a minimum amount of time for which the signal must persist before the “global consciousness” event may be said to have occurred.

They should provide an estimated amount of time for 200 such events to occur. On an agreed-upon starting date, they may begin monitoring for those specific events, publishing the results case by case as they occur. At the end of the estimated time for the 200 events, they should stop collecting data. For each occurrence, they should record a 1 if the criteria for a signal of "global consciousness" are met, otherwise a 0.

For the control, we will pick random times for sampling, one in each of 200 equal time windows. Each window should be exactly the same length as in the test case. We will form exactly the same statistic from the same random data streams. For the control, we will record a 1 if it meets exactly the same criteria for a signal of "global consciousness," otherwise a 0.

Once the data are complete, we can validly compare the results using Fisher's Exact test. I'll bet there is no significant difference. If there is, then this is worth further study. If there isn't, then the website should quit claiming they may be detecting "global consciousness."
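For concreteness, the final comparison would look something like this in Python (counts are invented for illustration; the one-sided Fisher test is computed directly from the hypergeometric tail, so only the standard library is needed):

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    probability of a or more row-1 hits under independence, computed as a
    hypergeometric tail."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# Illustrative counts only (not real data): 23 of 200 test events signal,
# versus 18 of 200 control windows.
p = fisher_exact_greater(23, 177, 18, 182)
print(p)  # no claim unless p is below the pre-chosen alpha
```

With counts this close, the p-value is nowhere near significance, which is exactly the kind of outcome the proposed test is designed to make visible.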



posted on Oct, 15 2013 @ 01:39 AM
Depending on whether the GCP people feel it is required, the chosen time periods for the control case may be either openly published or deposited in two sealed repositories, one of their choosing. They may do the calculation for both the test and control cases to ensure the algorithm is identical, in case it has rounding errors or other errors that favor a particular outcome. If the entire data stream is saved, the control case may be calculated after the test case is complete.

The reason this is included in the proposal is that the GCP proponents, or others, may feel that the "global consciousness," if it exists, may alter the random number generation either to avoid detection or to be more easily detected. If this objection to the control is to be raised, it should be raised at the beginning and the sealed repositories used. The GCP hypothesis does effectively include a claim of telekinesis, so this may be a desired requirement.



posted on Oct, 15 2013 @ 02:15 AM
you would be wise to research the quantum physics side of the GCP before posting a thread like this one. otherwise you're just bound to be wrong.

and i'm not sure where you got that 'whole data set' mumbo jumbo from. GCP displays the variance of the CURRENT data coming from the generators. such variance, as you know, should change with time pretty fast under normal conditions - that's how randomness works. of course if you take a large enough dataset, you'll get a flat line. but that is NOT how GCP operates.



posted on Oct, 15 2013 @ 02:18 AM
reply to post by jedi_hamster
 


This is about the statistical methods, not the GCP itself. Comparing to a supposed theoretical result is nonsense, and using a census as if it were a sample is also nonsense.



posted on Oct, 15 2013 @ 02:22 AM
reply to post by BayesLike
 


imho, the only nonsense here is your understanding of how the GCP works.
i updated my previous post, unfortunately after you'd posted your next one.



posted on Oct, 15 2013 @ 02:30 AM
reply to post by BayesLike
 


The analysis consists of essentially everything. As noted here:

GCP Source

The second figure displays the same data as a cumulative deviation from chance expectation (shown as the horizontal black line at 0 deviation). Truly random data would produce a jagged curve with no slope, wandering up and down around the horizontal. The dotted smooth curves show the 0.05 and 0.001 and 0.000001 probability envelopes that indicate significant versus chance excursions. This figure can be compared with a "control distribution" using simulations of the event series.

The jagged red line shows the accumulating excess of the empirically normalized Z-scores relative to expectation for the complete dataset of rigorously defined events. The overall result is highly significant. The odds against chance are much greater than a million to one.



I fixed the source. Look for the second figure.



posted on Oct, 15 2013 @ 02:46 AM
reply to post by BayesLike
 


and sure, they're using background data to calculate what is - perhaps due to a bad choice of words - called the 'expected result'. i would rather call it predicting the chance. with a large enough dataset you'll get a flat line, but with a non-infinite amount of data you'll get some sort of current 'trend'. how much the current data varies from that trend (or sticks strongly to it) is what the GCP is about, imho. after all, if the network variance is extreme in some period of time, it's hard to call that random.

and your problem with that graph is? it only proves that the data they're getting from the generators are, at times, hardly random, hence the cumulative result is far from the random distribution that can be expected for the given amount of data.



posted on Oct, 15 2013 @ 02:50 AM
reply to post by jedi_hamster
 


This error is similar to the type of error made by the Duke studies on ESP. It's not a good analysis and the results are highly questionable. That does not mean that the GCP hypothesis is wrong, but rather that the data analysis as presented does not support the conclusion.

The Copenhagen model of quantum mechanics would seem to support the GCP; that much is understood. But I do not believe, nor have I ever believed, that the usual interpretation of the Copenhagen model is correct. Nor is an infinitely branching universe required in a statistical sense. This is, IMO, a misunderstanding of probability that has plagued physics for decades -- it is implicitly based on a frequentist definition of probability dating from before the 1940s.

That viewpoint was replaced in the mid-1940s by a much more complete and elegant theory that put probability on a sound mathematical basis. The frequentist view of probability is incomplete and inconsistent -- and it requires absurd things, like the Copenhagen interpretation, to make sense. We no longer need that interpretation.



posted on Oct, 15 2013 @ 03:12 AM

jedi_hamster
reply to post by BayesLike
 


with large enough dataset, you'll get a flat line, but with non-infinite amount of data, you'll get some sort of current 'trend'.


The only valid comparison would be to a randomized control over the same time period. You can't make a valid comparison to "theory" because unknown factors may be at play in the generation of the random numbers, the discovery of the events, rounding errors, the statistical approximations used, and much more.

In addition, significance will always be found if enough data is examined -- even at millions-to-one or billions-to-one odds. The standard deviation of the test statistic is a fixed number in "theory," but the standard error you divide by shrinks as root(n). Of course no RNG matches the "theoretical result" perfectly -- so significance is guaranteed with large enough sample sizes.
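A back-of-the-envelope sketch of that point (pure arithmetic; the bias figure is hypothetical):

```python
import math

# A z statistic for a sample mean scales as bias * sqrt(n): any fixed
# imperfection in an RNG, however tiny, eventually crosses every
# significance threshold once enough data accumulates.
bias = 1e-4  # hypothetical per-sample deviation, in SD units
for n in (10**4, 10**8, 10**12):
    print(f"n = 10^{round(math.log10(n))}: z = {bias * math.sqrt(n):g}")
# A 0.0001-SD flaw reaches z = 100 by n = 10^12 -- odds far beyond
# millions to one, with no "consciousness" anywhere.
```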



posted on Oct, 15 2013 @ 06:40 AM
reply to post by BayesLike
 


while you are right that there are plenty of factors like possible errors due to rounding and other things, i guess they've tested their software well enough.

on the other hand, you seem to miss one point. with one generator, the whole project would be pointless hocus-pocus. what they're doing, though, is tracking multiple generators and the correlation between their results at any given time. when the network variance goes to an extreme for a period of time, it's neither coincidence nor a fault in their method, imho. correlation between the results of multiple generators from all over the world suggests some global external factor influencing them.



posted on Oct, 16 2013 @ 05:13 AM
reply to post by BayesLike
 


The GCP takes up the Challenge!


Hi,
Please let me introduce myself. I am (among many other things) moderator of the GCP Facebook page. I read this interesting discussion (shared it here and there) and wondered how Roger Nelson, the founder and driving force of the GCP thought about your points of critique. I mailed him and here's what he had to say about it:

"
Hi Chiel,

Thanks for the link and for your offer to add to the discussion. The post by BayesLike is polite, but there are some problems (and ironies.) I think the proposed "valid test" is essentially the same as what we actually do. GCP uses a public Hypothesis Registry, which specifies the exact dataset and the statistical tests a priori, and though we do comparisons with theory, the final, formal statistics use empirical parameters for each RNG's behavior. Moreover, in addition to theoretical comparisons, we also show the difference of the formal series from a distribution of "control" series drawn from the non-event database using resampling -- the best known method for determining empirical expectation. Here is a link to the direct resampling comparison:
teilhard.global-mind.org...
And a link to a similar graph using random simulations:
teilhard.global-mind.org...

For anyone who wishes to do so, any proposed test can be performed on the actual GCP data, which are always available for public download.

Best,
Roger
"

and later:
"
Hi Chiel,

I think it would also be useful to provide links to a couple of recent articles which show exactly what we do, and in the process address concerns like those expressed by BayesLike. These articles provide incisive discussion of the nature of the data deviations:

teilhard.global-mind.org...
teilhard.global-mind.org...&N.2008.pdf

I should also note that our formal scientific claims are not about global consciousness but about an accumulation of evidence for non-random correlations between independent random sources across global distances.

BTW, you are welcome to use or quote from these emails.

Best,
Roger
"

I think I am going to post this discussion on Fb also, since linking this site in Fb does not seem to work.

Looking forward to your response!
Chiel



posted on Oct, 16 2013 @ 01:41 PM
BTW this discussion has spread to the GCP Facebook page:

www.facebook.com...

Read more comments by Roger Nelson and hopefully soon GCP's statistical specialist.

Cheers!



posted on Oct, 16 2013 @ 10:05 PM
reply to post by ChielReemer
 


I sincerely appreciate this response and will review the provided links. It is important to have a project such as this on as sound a statistical footing as possible. I would be personally pleased if there is sound evidence for a possibly unknown process affecting random number generation which is linked to human events! If I can make any suggestions which might strengthen the statistical approach I certainly will.

I do favor a comparison with a statistically independent series of windows using the same RNGs for a number of reasons. The hypothesis I would like to see tested is one of independence of human event occurrence and the behavior of these statistics. An appropriate test for this type of hypothesis is a Fisher's Exact test. Resampling may or may not work if RNGs are affected by human events as the resampling itself could/would be confounded with the test of the hypothesis of independence.

One of the key reasons I would like to see the behavior of the statistics examined in this manner is that any finite series of random or pseudo-random numbers will have some structure away from theoretical. Another is that there is some evidence of a seasonal effect on random number generation based on radioactive decay. That raises the possibility that the statistics could deviate from theory simply because both the RNGs and the occurrence of events are affected by seasonal factors -- essentially a hidden correlation. This is a very difficult problem to control for and may not be completely controlled for with the proposed random windows. A control that accounts for potential seasonal effects and other hidden-factor correlations would be needed to properly handle these issues.
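A deterministic toy calculation of that hidden-correlation worry (Python with numpy; every number is invented for illustration): if a seasonal factor drives both the daily event rate and the expected value of the network statistic, event days deviate from theory with no causal link between events and RNGs at all.

```python
import numpy as np

days = np.arange(3650.0)                          # ten years, daily
season = 0.3 * np.sin(2 * np.pi * days / 365.25)  # shared seasonal driver

# Suppose the seasonal factor shifts the expected daily value of the
# network statistic AND raises the chance that a monitored event falls
# on that day -- with no causal connection between events and the RNGs.
event_rate = 0.02 * (1 + 5 * np.clip(season, 0, None))
stat_mean = season

# Expected statistic averaged over event days vs. over all days:
event_weighted = float((event_rate * stat_mean).sum() / event_rate.sum())
overall = float(stat_mean.mean())
print(event_weighted, overall)  # event days sit well above the overall mean
```

The event-weighted average is pulled away from zero purely by the shared seasonal factor, which is why a control matched in time is essential.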

I also favor a very clear hypothesis. Such a hypothesis is likely to be addressed in the more formal work provided in the links but is difficult to find on the GCP website. Given the above comments, I'm specifically avoiding any hypothesis test that compares to theoretical results, to simulation studies using a different set of RNGs, to a different time period, or to a different method of aggregating the statistics. This was the main reason for the specific test I recommended.

Many thanks for the response. I'd reply on Facebook too, but I don't "do" Facebook on principle. I would welcome you linking back to this reply or copying it to Facebook, as desired, if ATS allows.





posted on Oct, 16 2013 @ 11:18 PM
BTW: I wanted to add that I don't usually have a problem with resampling methods. They can be very good and are on sound statistical footing in general. However, in this specific case, random number generation is involved in the hypothesis being tested -- it is part of the central mechanics of the study -- so it becomes very questionable whether resampling should even be considered. It's a subtle problem, but vital.

Also, using the non-event part of the database as a control is questionable. The null hypothesis is one of independence of the behavior of the statistic(s) and the events. Under the null hypothesis, it does not matter if a random window contains a portion of an identified event or not. Disallowing those types of windows for the control biases the control statistics and thus the test of the hypothesis in an unknown manner. For example: if it did turn out that seasonality affects both the random number generation and occurrence of events but the presence of events does not affect the RNGs, using only non-event windows would exclude the control observations from fully controlling for seasonality.

I can see no way out of this source of bias without allowing possible overlap. Full overlap of an entire event and a same sized random window would be an extremely remote possibility. Thus, random windows without avoidance of the events does not violate the null hypothesis under test, but would potentially affect the power of the test to some extent. Probably not to a large extent if the events are rare and the windows small relative to frequency of occurrence. So, this should be safe and result in minimal loss of the power of the test for an appropriately chosen event type (or types).

It is with some trepidation that I recommended generating the sequence of random windows for the control before events are identified, as this will also use random number generation. A blind registration of the window selection was suggested as a possible means of preventing some resulting problems, but it does not fully address all possible issues surrounding the use of random number generation. It's clearly OK under the null hypothesis of independence, but it has an unknown effect on the power of the hypothesis test. Under the alternate hypothesis there could be a claim of active avoidance of detection, or of an active desire for detection. The potential for active avoidance of detection could complicate the interpretation of a finding that there is insufficient evidence to reject the null hypothesis.

Certainly, selecting windows in a structured manner is not allowable. A random selection within a fixed series of non-overlapping but adjacent windows is equivalent to a uniform Latin Hypercube sample in one dimension, which is known to be unbiased for the CDF. So this is allowable mechanics for the control selection save for the use of an RNG.
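That one-dimensional Latin hypercube selection is only a few lines of Python (the period length and window count here are illustrative):

```python
import random

def control_windows(start, total_hours, n=200, seed=None):
    """Pick one random control time inside each of n equal, adjacent,
    non-overlapping windows -- a uniform Latin hypercube in one dimension."""
    rng = random.Random(seed)
    width = total_hours / n
    return [start + (i + rng.random()) * width for i in range(n)]

# Illustrative: 200 control times across a 2000-hour monitoring period,
# one pick per 10-hour window.
picks = control_windows(start=0.0, total_hours=2000.0, n=200, seed=42)
print(len(picks))
```

Each pick is uniform within its own window, so the set of picks is unbiased for the CDF while still covering the whole period evenly.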

I can see no way out of using an RNG for selecting control windows. The best I can come up with at this time is to do it both before the trials begin and blind so that no-one knows the picks until after the final identified event is complete. Suggestions would be appreciated if another method is known that is both free of structure and free of RNGs.
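One way to run the "pick blind, reveal later" step is a simple hash commitment: publish only a digest of the salted picks before the trial begins, then reveal the picks and salt after the final event so anyone can verify nothing was changed. A sketch using the Python standard library (all specifics are illustrative):

```python
import hashlib
import json
import random

# Before the trial: generate the control picks, then publish ONLY the
# commitment digest. The picks themselves stay sealed.
rng = random.Random()
picks = sorted(rng.uniform(0.0, 2000.0) for _ in range(200))
salt = rng.getrandbits(128).to_bytes(16, "big").hex()

payload = json.dumps({"salt": salt, "picks": picks}).encode()
commitment = hashlib.sha256(payload).hexdigest()  # publish this now

# After the final identified event: release the payload; verifiers
# recompute the hash and confirm it matches the pre-registered digest.
assert hashlib.sha256(payload).hexdigest() == commitment
print(commitment[:16])
```

The salt prevents anyone from brute-forcing the picks from the digest, which covers the concern about the sealed repositories while keeping the registration public.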


