posted on Jan, 4 2013 @ 12:17 AM
Here is a sample from the event registry:
600 df  weight  Yes  279  Phillies Win World Series  2008-10-30 00:00:00  2008-10-30 02:29:59  1-sec  Stouffer Z  Chisquare
600 df  weight  Yes  280  US Election 2008           2008-11-04 20:00:00  2008-11-05 19:59:59  1-sec  Stouffer Z  Chisquare
600 df  weight  Yes  281  Mumbai Terror Attacks      2008-11-26 16:30:00  2008-11-27 16:29:59  1-sec  Stouffer Z  Chisquare
600 df  weight  Yes  282  Global Orgasm III          2008-12-21 00:00:00  2008-12-21 23:59:59  1-sec  Stouffer Z  Chisquare
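For what it's worth, here is my rough read of what that "Stouffer Z" column means, sketched in Python. This is my own toy version, not GCP's actual code, and the specific bit counts are my assumption: each egg reports the sum of 200 random bits per second, each egg's per-second sum is converted to a z-score, and the z-scores from all eggs are combined with Stouffer's method.

```python
import math
import random

def stouffer_z(trial_sums, n_bits=200):
    # Each trial sum is binomial(n_bits, 0.5): mean n/2, sd sqrt(n/4).
    mean = n_bits / 2
    sd = math.sqrt(n_bits * 0.25)
    zs = [(x - mean) / sd for x in trial_sums]
    # Stouffer's method: sum of z-scores divided by sqrt of their count.
    return sum(zs) / math.sqrt(len(zs))

# Simulate one second of data from 65 eggs (hypothetical count).
random.seed(1)
second = [sum(random.getrandbits(1) for _ in range(200)) for _ in range(65)]
z = stouffer_z(second)
```

If the eggs really are random, z should behave like a standard normal draw; the registry's per-event statistic would then be built from a stream of these per-second values.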
Here is my pseudo-whatever-I-am impression of this: it's random data tied to subjectively selected "events". Selection of the "global events" is key
here, and it seems to be the topic that is danced around the most on their site. This is most likely where the whole thing falls apart. How something
qualifies as an event is perplexing. How is this decided, and by whom? Global Orgasm day? Isn't that every day?
Another comment, or impression I get, is that they are very wordy and technical instead of giving a straight answer. In other words, you have to know
the lingo, or pretend like you do, and just accept it. It's a good technique to throw off those that don't know the language. Being in IT, I do this often.
Most people accept any answer you give them if it sounds too technical for them. "Sorry, I can't fix your computer because your RAM in the motherboard
ate 10 gigs of hard drive space, corrupting the boot files. It's Microsoft."
From the FAQ:
Have you picked a random event as a control and done an analysis of randomness based on that? If so, where might I find the results?
There is a full description of the statistical characterization of the data on the GCP website under the "scientific work" set of links. You can start
with the EGG data archive (broken link). Some relevant points are discussed elsewhere in this FAQ.
It is possible to pick a control "random event" as you suggest, but that is not a satisfactory way to address the implied question. We use the more
powerful techniques of random resampling, permutation analysis, and full database statistical characterization. For some purposes, e.g., to establish
statistical independence of measures, we use simulations and modeling.
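The "random resampling" they mention is easy enough to sketch. This is my own toy version (the function names and windowing scheme are mine, not theirs): score the event window with a chi-square-like sum of squared z-values, then see how often a randomly placed window of the same length scores at least as high.

```python
import random

def window_stat(zs):
    # Chi-square-like accumulation of squared z-values in a window.
    return sum(z * z for z in zs)

def resampling_p_value(series, start, length, n_resamples=1000, rng=None):
    rng = rng or random.Random(0)
    observed = window_stat(series[start:start + length])
    hits = 0
    for _ in range(n_resamples):
        # Drop an equal-length window at a random position.
        s = rng.randrange(len(series) - length)
        if window_stat(series[s:s + length]) >= observed:
            hits += 1
    # Fraction of random windows at least as extreme as the event window.
    return hits / n_resamples

rng = random.Random(42)
series = [rng.gauss(0, 1) for _ in range(10_000)]
p = resampling_p_value(series, start=5000, length=60)
```

On purely random data like this, p should land anywhere in (0, 1) with no preference, which is exactly the point of the technique: the event windows are judged against the data's own background, not against a single hand-picked "control event".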
some things I have to brush up on...
full database statistical characterization
The brief answer to your question is that the normalized data do show expected values across the full database in all moments of the appropriate
statistical distributions. The same is true of the individual physical random sources and their composite, and it is true for each of the measures we
use in the hypothesis testing. This is the background. Against that background, the replicated hypothesis tests in the formal series show departures
from expectation, with a composite Z-score of about 4.5 (May 2007).
The brief answer? I learned absolutely nothing from this response.
And then the folks that DO know better are left to sort through a mountain of data just to prove that it's BS.
So it seems like they have all their bases covered. Well done.
edit on 4-1-2013 by ZetaRediculian because: (no reason given)