
Massive UFO disclosure in USA: A challenge for ATS

posted on Aug, 5 2011 @ 10:27 PM
reply to post by IsaacKoi
 

I mistakenly assumed that if I loaded the image into my profile and posted it, everyone could see it. My bad! Here is the link.
Project Blue Book Smithsonian Astrophysical Observatory report from LC 1958
I downloaded all the Socorro files from footnote, too. I live here in NM.
Thanks for the reply,



posted on Aug, 5 2011 @ 10:31 PM
The handwritten note at the bottom of the document says "Discoverer of Pluto, expert on Mars"
The blacked-out name is Clyde Tombaugh.
As a side note, the Universalist church down the street from my apt has a beautiful stained glass window in honor of Tombaugh. Someday when I'm walking home from class at night I'll snap a pic and upload it (the window is lit beautifully at night.)
Cheers



posted on Aug, 6 2011 @ 02:29 AM
reply to post by Xtraeme
 


So, I scored a URL from Flash:

img0.footnote.com...

Notice, no size. If we can get the HASH, we're totally rolling... I'm getting that from a program which analyzes browser caching. Notice, it doesn't require membership to download. One alternative to your technique (but still a monumental hack) might be to automate walking the database and just saving the cached image! I mean all these images are on our hard drives already!


I'll look into starting a little DB of my database walk.
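The "hash" half of this plan is speculative, but the de-duplication side of a database walk only needs a locally computed digest of each cached image. A minimal sketch (purely illustrative; nothing here reflects how Footnote itself computes anything):

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Return a hex digest usable as a stable key for a cached image payload."""
    return hashlib.sha1(data).hexdigest()

# Identical payloads hash to the same key, so re-walking the
# database never stores the same image twice.
a = file_digest(b"fake image bytes")
b = file_digest(b"fake image bytes")
c = file_digest(b"different bytes")
```

Keying the local store by digest also means the walk can be restarted at any point without re-downloading anything already saved.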



posted on Aug, 6 2011 @ 02:58 AM
Seems like all the metadata you're hoping to get comes from a Javascript call, which I don't have access to from Perl. I can get this type of data:

www.footnote.com...[0]=project+blue+book&nav=4294966953+4294961629

Project Blue Book - UFO Investigations

… 1968 » July » Brooklyn, New York » Page 132

Very brief, but that's about it - that and of course the record number. Is that going to be helpful?

I'm getting kind of stuck and I don't see any hope of reverse engineering the hash. I'll probably put a few cycles into the Canadian side today, which is much less obfuscated.
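The snippet format quoted above splits cleanly on the "»" separators, so parsing it into fields is straightforward. A sketch (assuming the layout shown in the quoted search result; the real scrape was done in Perl):

```python
def parse_snippet(snippet: str):
    """Split a Footnote search snippet like
    '… 1968 » July » Brooklyn, New York » Page 132'
    into (year, month, place, page)."""
    parts = [p.strip() for p in snippet.lstrip("… ").split("»")]
    year, month, place, page_label = parts
    page = int(page_label.removeprefix("Page "))
    return year, month, place, page

rec = parse_snippet("… 1968 » July » Brooklyn, New York » Page 132")
```

The place field can itself contain commas ("Brooklyn, New York"), which is one reason splitting on "»" rather than on commas matters here.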



posted on Aug, 6 2011 @ 03:17 AM
Isaac, stop spamming ATS with your epic work...
Send it to a science TV network like Discovery
or something, where it belongs. Or a media network
with a GOOD reputation... If your work is true, they
wouldn't hesitate...

You should write it up as a script and ask
for a publication...

I believe some 10% actually read your ENTIRE work.
The rest skim through...

But I would SOAK it up if it were broadcast....



posted on Aug, 6 2011 @ 03:20 AM

Originally posted by Evanzsayz
Documents? Just like the bible...something written on paper I guess that proves they are real right?

*Facepalm*
Are you really that stupid? How can you lump documents, at least the ones that also include audio recordings of people describing an object in the sky, together with the Bible? It needs no explanation; work out for yourself why such documents are more credible than the Bible. I'm not going to be your brain, use yours, if you have any.

And what if all these sightings are either natural phenomena or military craft? That in many cases people saw ships or craft of some kind is not in doubt. But since there is barely any real evidence of greys beyond stories and vague documents or pictures, for the sake of sane thinking we can call them military craft. I hope that satisfies dunces like the one I quoted enough to stop claiming that people didn't see anything and that all those thousands of people were 'just telling fairy-tales'.
edit on 6-8-2011 by Imtor because: (no reason given)



posted on Aug, 6 2011 @ 03:39 AM

Originally posted by idealord
… 1968 » July » Brooklyn, New York » Page 132

Very brief, but that's about it - that and of course the record number. Is that going to be helpful?


YES
, since that lets us fairly quickly save the records with file names (or in subdirectories) reflecting any particular date/location (particularly if we can also OCR the records).

The structure and search functions on footnote.com are quite good (although it is a shame that no one appears to have thought to store the Project Blue Book case number in association with each group of records, since those case numbers are fairly often used as a reference in the better UFO books and in at least one UFO database I own). The problem to solve is getting local hard drive (i.e. MUCH faster) access to the images while ideally retaining the footnote.com storage structure and similar OCR search functions.

All the best,

Isaac



posted on Aug, 6 2011 @ 04:23 AM
K, I'll focus on creating a spreadsheet which'll have the URL for each Footnote document, that brief snippet of text, and the numeric identifier for each doc.

Jeff



posted on Aug, 6 2011 @ 04:43 AM

Originally posted by idealord
K, I'll focus on creating a spreadsheet which'll have the URL for each Footnote document, that brief snippet of text, and the numeric identifier for each doc.


Hi Jeff,

Rather than creating a spreadsheet, I had in mind using those snippets when downloading the associated image so that the file name is informative (e.g. a jpg or pdf filename of year-month-day-location-page number-numeric identifier). This is definitely my priority.

(Incidentally, I presume that the numeric identifier is the same as the number that appears at the end of a URL, so if the filename or associated data includes that numeric identifier then we can easily reconstruct the URL to that image on footnote.com.)
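That naming convention can be sketched in a few lines of Python (an illustration only; the sanitization rule and the "skip missing fields" behaviour are my assumptions, not part of the original proposal):

```python
import re

def record_filename(year, month, day, location, page, record_id, ext="jpg"):
    """Build a year-month-day-location-page-id filename.
    Any field that is missing (None or empty) is simply skipped."""
    parts = [str(p) for p in (year, month, day, location, page, record_id) if p]
    stem = "-".join(parts)
    # Replace anything that isn't alphanumeric or a hyphen, to stay filesystem-safe.
    stem = re.sub(r"[^A-Za-z0-9-]+", "_", stem)
    return f"{stem}.{ext}"

name = record_filename(1968, "07", None, "Brooklyn NY", 132, 11443600)
```

Because the numeric identifier sits at the end of the stem, the URL on footnote.com can be recovered from the filename alone, as suggested above.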

However, I can see that a separate spreadsheet could also be useful.

I can think of several little statistical projects where such a spreadsheet could be useful. Edit to add: the more I've thought about it in the last few minutes, the more such projects I've thought up.

With a bit of work (hopefully not very much), the spreadsheet could also be used for other purposes. For example, one thing that I've found irritating when browsing the Project Bluebook files is that you have to open each path to find out the number of pages associated with each incident (i.e. each year-month-day-location). Many of the incidents only have a page or two associated with them, and frankly these are largely a waste of time because of the limited amount of information. I'd be interested in sorting the incidents (i.e. year-month-day-location) by the number of pages to identify the largest files and/or filtering the records to identify all those with over, say, 10 or 100 pages. This is something that has never before been done with the Project Bluebook records...
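Once per-record data exists, the page-count sorting described above is a simple group-and-count. A sketch (the record layout here is assumed, not taken from the actual scrape):

```python
from collections import Counter

def big_incidents(records, min_pages=10):
    """records: iterable of (year, month, day, location, page) tuples.
    Return the incidents (year, month, day, location) that have at
    least min_pages pages, largest first."""
    counts = Counter(r[:4] for r in records)
    return sorted(
        ((inc, n) for inc, n in counts.items() if n >= min_pages),
        key=lambda x: -x[1],
    )

# Hypothetical demo data: one 12-page incident and one 2-page incident.
demo = [("1968", "07", "12", "Brooklyn", p) for p in range(1, 13)]
demo += [("1964", "04", "24", "Socorro", p) for p in range(1, 3)]
result = big_incidents(demo, min_pages=10)
```

The same counts could equally be produced with a pivot table once the data is in a spreadsheet; the point is only that the filter becomes trivial once the incident key (year-month-day-location) is explicit.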

All the best,

Isaac
edit on 6-8-2011 by IsaacKoi because: (no reason given)



posted on Aug, 6 2011 @ 06:21 AM
I just wanted to drop in and say that I just finished reading Hector Quintanilla's manuscript, and it was amazing. It makes one consider the possibility that aliens are not in fact visiting Earth. I highly recommend this as required reading, and if this manuscript speaks the truth, which I believe it does, then the entire history of UFOlogy, and what most have learned or been taught, needs to be re-evaluated or thrown out the window.

I am going to check out the others that you posted as well. Thanks for the information. I also went through some of the Canadian documents, by randomly entering numbers into the url field, and I came across a very interesting hand-written account of a sighting, which had at least six witnesses.

Also, the manuscript I referred to earlier really opened my eyes to the other possibilities when investigating UFO encounters.



posted on Aug, 6 2011 @ 07:07 AM
reply to post by IsaacKoi
 



A spreadsheet (really a database) is the way to go. You can always just flatten it into text and you get the organized columns. Also, right now the search results expose a little data set for every image. For example:

… 1968 » July » Brooklyn, New York » Page 132

and

… [BLANK] » [BLANK] » [BLANK] » Page 7453

Looks like a year, a month, a place and a page.


I'm pretty close now, just have to parse that field and I'll be slurping the whole thing down.

It won't be that big of a database and I might be able to host it and make it searchable with different methods... no promises though!

Jeff



posted on Aug, 6 2011 @ 08:38 AM
OK, it's running... it's doing about 20 records every 10 seconds. I'm parsing the data into RecordID, Year, Month, Place, Page and Processed Raw Data and I'm keeping another file of RecordID and Unprocessed Raw Data.

This data, FWIW, was total crap. It's got non-web-friendly characters in it, probably all Microsoft formatting stuff... anyway, it's running. Once it's finished, probably in the morning, I'll post a zip file of both a pipe-delimited CSV file and a spreadsheet.
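A pipe-delimited export with a control-character scrub might look like this in Python (the actual run was in Perl; the cleanup rule and column names are assumptions, and the sample row uses the record ID quoted later in the thread):

```python
import csv
import io
import re

def clean(field: str) -> str:
    """Collapse tabs and other control characters left in the source data."""
    return re.sub(r"[\x00-\x1f\x7f]+", " ", field).strip()

rows = [("11443600", "1968", "July", "Brooklyn,\tNew York", "132")]
buf = io.StringIO()
writer = csv.writer(buf, delimiter="|")
writer.writerow(["RecordID", "Year", "Month", "Place", "Page"])
for row in rows:
    writer.writerow([clean(f) for f in row])
```

Using "|" as the delimiter avoids the obvious problem that place names like "Brooklyn, New York" contain commas.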

Jeff



posted on Aug, 6 2011 @ 08:53 AM

Originally posted by idealord
This data, FWIW, was totally crap. It's got non-web friendly characters in it. Probably all Microsoft formatting stuff... anyways it's running.


Thanks Jeff.

I wonder if some of the non-web friendly characters indicate the records where day, month, or location are to be shown as being "blank" or "illegible"? This may be wishful thinking on my part...

(The files labelled with a location of "blank" are actually the most interesting to me, since they are generally ones which do not relate to one specific incident but rather relate to statistical analysis or reports on the phenomenon as a whole.)

All the best,

Isaac



posted on Aug, 6 2011 @ 09:06 AM
reply to post by IsaacKoi
 


Heh, well here's where I am:

spreadsheets1.google.com...=0

As you can see:

Column A: Record ID
Column B: Year
Column C: City
Column D: State
Column E: Page Number

I got up to 4000 docs before I realized I was missing the Month data.

Take a look and see if you have any suggestions. I'll fix it and start the run again.

Jeff
edit on 6-8-2011 by idealord because: Added some info.



posted on Aug, 6 2011 @ 09:18 AM
Ok, running again: Here's the sample output:

spreadsheets0.google.com...=0

As you can see, there are still a few bugs because the data is non-normalized and really crap. When a state is something like D.C. or L.I., the month gets left out... oh well. I'll fix that someday!

Isaac, the non-web-friendly characters are leftovers from when the original documents were parsed into the Footnote database. They consist of things like tabs, etc. Not information, just separator values.

Cheers!

Jeff
edit on 6-8-2011 by idealord because: (no reason given)



posted on Aug, 6 2011 @ 09:30 AM

Originally posted by idealord
Column A: Record ID
Column B: Year
Column C: City
Column D: State
Column E: Page Number
...
Take a look and see if you have any suggestions. I'll fix it and start the run again.


Hi Jeff,

This looks very promising and should be very useful when we have the images downloaded as well.

Apart from the missing month data, my main comment is that Columns B, C, and D are often left blank when those fields are either illegible or blank; I think it is desirable to have an indication of which case applies. Column E in the relevant rows shows that this data is available, since the word "illegible" or "blank" appears, as appropriate, in that column.

Less significantly, a limited number of the rows seem to be slightly askew, with, for example, the abbreviation for New Mexico appearing as N. in one column and M. in another.

Finally, assuming (which I haven't checked) that the record number in column A is the number that appears at the end of the URL displaying that image, perhaps that column could also include, in addition to the number, a hyperlink to footnote.com containing the relevant number, so that a user can go directly to that page? I should be able to add this afterwards anyway.
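Generating such a hyperlink column is a one-liner per record. A sketch (the URL pattern below is a placeholder, since the real footnote.com paths are truncated throughout this thread; the ID comes from the example quoted later):

```python
# Assumed URL pattern — the actual footnote.com path is not shown in the thread.
RECORD_URL = "http://www.footnote.com/image/{rid}"

def hyperlink_formula(record_id: int) -> str:
    """Spreadsheet HYPERLINK() formula that displays the record ID
    and links to that record's page."""
    url = RECORD_URL.format(rid=record_id)
    return f'=HYPERLINK("{url}", "{record_id}")'

formula = hyperlink_formula(11443600)
```

Writing the formula string into the CSV means the link becomes clickable as soon as the file is opened in a spreadsheet application that supports HYPERLINK().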

All the best,

Isaac



posted on Aug, 6 2011 @ 09:36 AM
Latest (sorry no column names this time):

spreadsheets.google.com...

Because of the inconsistent BLANK BLANK or ILLEGIBLE ILLEGIBLE entries, I tried to parse it so that if a field was either one, it would just be left empty. That's also why I kept the raw data there on the left.

And yes, the ID is the basis for the URL for the web page for that image:

ID: 11443600

URL: www.footnote.com...

Writing parsers for inconsistent data is impossible, or at best problematic. Plus, when you add meaningless characters in the middle of the fields (those » characters aren't ASCII; they're who knows what), it's just a mess. I was lucky to be able to separate city and state as often as I did.

I'm afraid at some point it'll come down to volunteers hand-fixing the data (which is why I kept the processed raw on the side).

That's the rotten truth about databases, the normalization process is often grunt work.
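The BLANK/ILLEGIBLE handling described above can be captured in a tiny normalizer; a sketch of that one normalization step (not the actual Perl used for the run):

```python
def normalize(field: str) -> str:
    """Map Footnote's BLANK/ILLEGIBLE placeholders to an empty field;
    pass everything else through stripped."""
    f = field.strip()
    return "" if f.upper() in {"BLANK", "ILLEGIBLE"} else f
```

Keeping the raw data alongside, as was done here, is the right call: a rule like this inevitably needs revisiting, and re-running it against the raw column is much cheaper than re-scraping.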

Jeff
edit on 6-8-2011 by idealord because: correction

edit on 6-8-2011 by idealord because: correction

edit on 6-8-2011 by idealord because: addendum



posted on Aug, 6 2011 @ 09:43 AM
link   
reply to post by IsaacKoi
 


Thanks for all this info, I'll be sure to read through it all over the next few days!



posted on Aug, 6 2011 @ 09:52 AM
reply to post by kkrattiger
 


This might be seen as a stupid question, but I haven't been able to research it and the search button here doesn't work for me, so here it is: I have seen Project Blue Book mentioned a lot on the forums here, but what is it?



posted on Aug, 6 2011 @ 10:59 AM
One tenth of the way there, the first 12,000 documents:

spreadsheets.google.com...


