
Massive UFO disclosure in USA: A challenge for ATS

posted on Jul, 28 2011 @ 05:28 AM
Found this on Footnote in the Blue Book files last night. A report from 1958: "I'll admit that calling everything a bright meteor is a very handy thing to do but, until we have definite evidence that it is something else, I would still rather favor that explanation." I don't know who wrote it, as I didn't get the next page(s).
The rest of the paragraph, and the whole report, are interesting as well.



posted on Aug, 2 2011 @ 02:05 PM
Hi, my name is fairwarrior. I am a new member at ATS and have been researching UFOs for a while now; I got interested after various encounters and sightings. This is a video that I filmed in July this year. Can anyone give me any feedback, please? youtu.be...



posted on Aug, 2 2011 @ 06:45 PM
A bit off topic, but there is a base in Western Australia called Lancelin Training Ground. I always wanted to see this base since I found out about it. It's now being used by the US military, right on the ocean, so I thought this would be a great place for the US to run its tests.

I went to Australia last year, hired a car and went up there. There is a new road, just finished last September, which runs through that base; it's like the new Alien Highway. Very creepy, actually. I slept there for the night, a very scary place; you could see big lights shining all night, so a lot of activity going on.

I thought if I would see a UFO anywhere near a base it would be here, and I actually did: off the coast of Lancelin beach, a yellow ball hovering above the sea, probably ten to twelve miles out at a guess. It was moving up and down and then did the usual dancing around a bit.

Maybe someone else knows more about this base and has seen UFOs there too.







posted on Aug, 3 2011 @ 12:17 PM
reply to post by IsaacKoi
 


Deputy Lonnie Zamora's sighting near Socorro, New Mexico was too good to be true, and that is why all the truth was taken out of the official report. No rocket engine would be visible in any ET ship. Normal-sized people were changed to small ones to fit the story line of ETs. And so, good overactive agents are putting out more copy to dismiss any event, which has now been changed to "no such thing as UFOs": spinning reports around the truth about the power in the environment that the authorities can't handle. Who grabbed the cosmic power first? Tesla, that's who.



posted on Aug, 4 2011 @ 02:25 PM

Originally posted by zorgon
Did you ever do that thread on the Canadian files? I believe you said you had a quick d/l for that one?


Hi Zorgon,

After some delay (since, as you'll see, the thread got a bit bigger than I'd originally planned), the thread on the Canadian files is at the link below:

Canadian disclosure: “UFO Found” and other documents/photos

All the best,

Isaac



posted on Aug, 4 2011 @ 10:11 PM
Hi Isaac, amazing amount of work you are putting into this. I'm afraid I've not really got as much time to go through all of the info that you're putting up here, and I've hit a bit of a stumbling block, and that's that pesky Tesla.

I'm assuming (I know it's wrong) that you have incorporated this guy into your research, and I would love to read your thoughts on him, as I don't know what to believe about him. Most say he was mad and then dismiss him, which I find annoying, but I believe most if not all geniuses are mad, so I'd be a tad disappointed if he wasn't.

What really worries me about him is his aristocratic background, which would imply to me the possibility that he was either a disinformatist or was chosen to give the world the inventions that would progress society when it was needed.

As I say, I'd love to be able to read your thoughts or conclusions about him, and then maybe I could understand the UFO angle a bit better than I already do. Thanks in advance. Jinni



posted on Aug, 5 2011 @ 01:05 AM
And thanks again to Isaac for this thread and for bringing this to our attention! There is a tool which can download entire pages; you may try it out: www.chip.de... (HTTrack is the name of the program!)



posted on Aug, 5 2011 @ 08:46 AM
Just poking around a little and I found this equivalent:

www.footnote.com...

A Mobileviewer is mentioned. I got it to work with the Toms River pic by inserting its ID number:

I transformed this:

www.footnote.com...|7276044

into:

www.footnote.com...

On looking at the source for the Mobileviewer I found this img ref:

img4.footnote.com...

Which outputs the image with no HTML.
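The footnote.com URLs above are truncated by the forum, so the exact paths can't be reproduced here, but the rewrite Jeff describes (take the ID after the "|" separator and feed it to the Mobileviewer page) can be sketched generically. This is an illustrative Python sketch; the VIEWER prefix is a hypothetical placeholder, not the real endpoint:

```python
# Sketch of the ID-to-Mobileviewer rewrite described above.
# VIEWER is a hypothetical stand-in; the real footnote.com path
# is truncated in the thread and is not reproduced here.
VIEWER = "http://www.footnote.com/mobileviewer?id="  # hypothetical

def image_id_from_url(url):
    """Pull the numeric ID that follows the '|' separator."""
    return url.rsplit("|", 1)[-1]

def mobileviewer_url(url):
    """Rewrite an image-page URL into its Mobileviewer equivalent."""
    return VIEWER + image_id_from_url(url)

print(mobileviewer_url("http://www.footnote.com/image/page|7276044"))
```

Fetching the Mobileviewer page and grabbing the single img tag inside it then yields the raw image with no HTML wrapper, as Jeff notes.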

Hope this helps. I've written a lot of scrapers and spiders (in Perl) and this should be an easy task...

Jeff


edit on 5-8-2011 by idealord because: Made another little breakthrough.



posted on Aug, 5 2011 @ 09:04 AM
Sorry if I'm over-stepping, and I have not read this whole thread. I'm going to try to write a scraper that will access this listing:

www.footnote.com...[0]=project+blue+book&nav=4294966953+4294961629

Which produces 124,826 images.

I'm going to walk those search results and attempt to download each pic.

Is this what you guys are hoping to do?

Jeff



posted on Aug, 5 2011 @ 10:00 AM
Downloading the images now... Where should I put them?

No way I can host all of them, but here's what I've got dl'ed in the past few mins:

parnasse.com...

Jeff
edit on 5-8-2011 by idealord because: added a url



posted on Aug, 5 2011 @ 10:59 AM

Originally posted by idealord
Sorry, if I'm over-stepping, and I have not read this whole thread. I'm going to try to write a scraper that will access this listing:

www.footnote.com...[0]=project+blue+book&nav=4294966953+4294961629


Hi idealord,

Many thanks for your posts. You are CERTAINLY not over-stepping.

Your approach sounds very interesting, if a bit beyond my technical understanding. Could I impose on you to explain your approach a bit more, in terms a lawyer could understand?


I would not expect you to wade through all the posts in this thread, but I'd suggest at least glancing through the few posts by Xtraeme about writing a screen-scraping macro. In particular, he seems to have identified a way of getting the footnote website to provide the month, day and (from memory) location relating to each page, which would of course be very useful when organising the images that, as I understand it, your approach may actually download more efficiently than his.

At first glance (and with my limited technical understanding) it seems it may be possible to combine your approach to downloading the images with Xtraeme's knowledge about obtaining the data associated with each image, so we can get the images efficiently AND save them with some sort of informative file names.

In short, it sounds promising insofar as I can understand it and I hope you and Xtraeme can have an exchange of ideas/thoughts so that we can get the most efficient and effective approach.

All the best,

Isaac



posted on Aug, 5 2011 @ 11:05 AM
No problem... I searched the Footnote database for the Project Blue Book listing with a Perl program. That produced a series of pages, each one with 20 results. I saved the IDs behind each of those results, loaded them into that Mobileviewer URL, then looked for the img tag and downloaded the image.

Perl is doing everything. It's called web automation and has been around for a while; I used to do this professionally. So the Perl program hits the database, gets the results, parses the web page looking for a specific type of ID, loads that page into the Mobileviewer page, looks for the image tag and slurps down the image.

Each image keeps the ID from the search results records, so it should be easy to recreate the data listing.

Jeff
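To make Jeff's walkthrough above concrete: his program was Perl, but the same steps (pull IDs out of a results page, load each into the Mobileviewer, grab the img tag) can be outlined in Python. The markup patterns and IDs below are invented stand-ins, since the real footnote.com HTML isn't shown in the thread, and the network fetches are replaced with canned strings:

```python
import re

def ids_from_results_page(html):
    """Collect the image IDs embedded in one page of search results.
    The data-image-id attribute is an assumed pattern; the real
    footnote.com markup would need its own regex."""
    return re.findall(r'data-image-id="(\d+)"', html)

def img_src_from_viewer_page(html):
    """Find the single <img> tag on a Mobileviewer page and return
    its src, i.e. the raw image URL with no HTML around it."""
    m = re.search(r'<img[^>]+src="([^"]+)"', html)
    return m.group(1) if m else None

# Canned stand-ins for what the live pages would return:
results = '<div data-image-id="7276044"></div><div data-image-id="7276045"></div>'
viewer = '<img class="page" src="http://img4.footnote.com/7276044.jpg">'

for image_id in ids_from_results_page(results):
    print(image_id)                      # each ID found on the results page
print(img_src_from_viewer_page(viewer))  # the bare image URL to download
```

In the real scraper, each extracted ID would be substituted into the Mobileviewer URL and the resulting src fetched to disk, page after page through the 20-result listings.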



posted on Aug, 5 2011 @ 11:17 AM
Oh well, it just crapped out at 901, so I'll have to run it again... I've got the first 900 or so images though... No problem, I just need a glass of wine now!


Jeff



posted on Aug, 5 2011 @ 01:09 PM

Originally posted by idealord
I've got the first 900 or so images


Great. It certainly sounds like you've found a more efficient way to get the images themselves, although I'd still like to combine your technique (if at all possible) with Xtraeme's method of getting the date/location information at the same time as the image, so we can save the documents in relevant directories and subdirectories (akin to the path to the images on footnote.com).



I just need a glass of wine now!



It sounds well deserved.

All the best,

Isaac



posted on Aug, 5 2011 @ 01:21 PM
reply to post by idealord
 

Hey Jeff, I tried the mobileviewer approach myself, but unfortunately it outputs lower-quality images.

For a comparison, see the mobile image (698 x 1024 @ 69 KB):

files.abovetopsecret.com...

Versus the full image (3888 x 5695 @ 1.9 MB):

postimage.org...

For the purpose of OCR, especially with FOIA documents that are frequently borderline illegible, higher-quality images are preferable. It's unfortunate, but the only way I can see to get the full images is through the Flash interface.

Another approach to downloading the content that I tried a day ago involved making a few small modifications to a Flash interpreter so I could hook the ActionScript directly rather than having to fake the input. This had an adverse side effect, though, because it wanted user confirmation to write to disk. Because of this I've decided I'm just going to stick with the macro approach.

The script is basically working. The way I went about it: it first grabs some input from the user to find the coordinates of the "Download" button. From this it calculates all the other button locations on the page. Then it just runs through the routine as described here. One thing I'm working on at the moment is getting all the metadata for each of the images, so the files aren't just random strings. I'd also like to set it up to remember where it starts and stops; that way the task can be distributed amongst a group of people if necessary, writing the data to a common repository (FTP/WebDAV). Though I'll probably save that for last.
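The "remember where it starts and stops" idea is essentially a checkpoint file. A minimal sketch, assuming a plain text file that holds the index of the next image to process (the file name and slicing scheme are my inventions, not Xtraeme's script):

```python
import os

CHECKPOINT = "bluebook_progress.txt"  # hypothetical file name

def load_position(default=0):
    """Index of the next image to process, or `default` on a fresh run."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return int(f.read().strip())
    return default

def save_position(index):
    with open(CHECKPOINT, "w") as f:
        f.write(str(index))

def run_slice(start, stop):
    """Process images [start, stop), checkpointing after each one so an
    interrupted run, or another volunteer, can pick up where it stopped."""
    for i in range(max(start, load_position(start)), stop):
        # ... download / OCR image i here ...
        save_position(i + 1)
    return load_position(start)

print(run_slice(0, 5))
```

Handing different [start, stop) ranges to different people would split the 124,826 images across the group, with each person's checkpoint file recording their own progress before results are merged into the common repository.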

That's it for now!
-Xt

edit to add:
Just an FYI: I also tried using the GET approach by inserting the correct width and height (i.e. width=3888&height=5695 ... img4.footnote.com... ), but it throws an exception. So, unfortunately, the simple wget-style method isn't likely to work. Though I'd be happy to be proven wrong!




posted on Aug, 5 2011 @ 02:23 PM
Ugh... yeah, I tried changing the Mobileviewer image size values and got the same servlet error. If we just knew what the hash system was doing, this would be easy!

FWIW, I used to use a Japanese program that would record mouse movements to script that kind of stuff. It seems like you decompiled the Flash and it's well-obfuscated?

Another trick I've used is to use a local proxy of some sort, to capture all GET requests. Maybe I'll try that in the morning...

FWIW, I can get you a complete database listing of all the URLs for each of the images, if that'd be helpful. I can easily write something that'll capture each URL's metadata and annotations. Anything that's not Flash is easy to grab with Perl's WWW::Mechanize and HTML::TokeParser libs.

I got 3,200 images down at 1280 resolution and, after reading your response, have stopped running it.



posted on Aug, 5 2011 @ 02:52 PM

FWIW, I can get you a complete database listing of all the URLs for each of the images, if that'd be helpful. I can easily write something that'll capture each URL's metadata and annotations. Anything that's not Flash is easy to grab with Perl's WWW::Mechanize and HTML::TokeParser libs.

That would be great. Having a simple way to call GetSubjectOrLocation(), GetMonth(), GetYear(), and GetIncidentNumber() would be handy. I'd also like to write out a .txt file for each image with the details from the "About image" panel. The more information the better. I've been trying to think up a way to automate importing the contents into a single PDF using PDFlib, but I'm not sure how well it would work.
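A per-image sidecar file like the one described above could be as simple as the sketch below. The four field names mirror the accessors just mentioned; the key: value layout and the dummy values are my assumptions for illustration:

```python
def write_sidecar(image_id, meta, directory="."):
    """Write <image_id>.txt beside the image, holding the details
    from the 'About image' panel as simple key: value lines."""
    path = f"{directory}/{image_id}.txt"
    with open(path, "w") as f:
        for key in ("subject_or_location", "month", "year", "incident_number"):
            f.write(f"{key}: {meta.get(key, '')}\n")
    return path

# Dummy metadata for illustration only:
example = {"subject_or_location": "Example Location", "month": "July",
           "year": "1958", "incident_number": "0000000"}
print(write_sidecar("0000000", example))
```

Because the file carries the image's own ID in its name, the sidecars stay paired with the images however the set is later reorganised or merged into PDFs.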


Another trick I've used is to use a local proxy of some sort, to capture all GET requests. Maybe I'll try that in the morning...

Couldn't hurt.



FWIW, I used to use a Japanese program that would record mouse movements to script that kind of stuff. It seems like you decompiled the Flash and it's well-obfuscated?

Yep, it was a little ugly (probably largely due to the tool I used), but they basically generate the images on the fly through the servlet. It's possible that if I spent a bit more time with it we might be able to figure out how they generate the hash. That would be a boon, because then we could do everything using mod_perl.


I got 3,200 images down at 1280 resolution and, after reading your response, have stopped running it.

It could still be useful. Perhaps Isaac would like two data sets to work from? Maybe one as CBRs and the other as PDFs? My main goal is to get all the content OCR'ed; without that it's just too much data to sift through manually. Since I know it's going to take probably a month or two of dedicated computer time to OCR all 15,000+ docs, I'm not sure I have the patience to do it twice.



posted on Aug, 5 2011 @ 04:38 PM
reply to post by kkrattiger
 


I realize this thread is about the problem of getting the Blue Book footnote pages downloaded, but I was just wondering if anyone read my post at the top of this page... In the bottom right corner, note the handwritten "discoverer of Pluto" annotation.

And the last paragraph says (paraphrased) "we should keep a list of the things seen because at a later date, intelligence may reveal some strange things were up there".
edit on 5-8-2011 by kkrattiger because: Added last sentence



posted on Aug, 5 2011 @ 06:22 PM

Originally posted by Xtraeme
Perhaps Isaac would like two data sets to work from? Maybe one as CBRs and the other as PDFs? My main goal is to get all the content OCR'ed. Without this it's just too much data to manually sift through.


Hi Xtraeme (and Jeff and others)

My priorities appear - for fairly obvious reasons - to be the same as yours: getting the best-quality copies of the pages possible, with as much associated data as possible (ideally with some of that data being used to store the images with meaningful file names and/or in relevant subdirectories), and OCR'ed to assist with searching.

I'm not sure that a second, lower-quality set of the images will help - although if the difficulties with getting the higher-resolution images exhaust your patience, then I'd certainly prefer somewhat lower-quality images to none at all.

All the best,

Isaac



posted on Aug, 5 2011 @ 06:24 PM

Originally posted by kkrattiger


I realize this thread is about the problem of getting the Blue Book footnote pages downloaded, but I was just wondering if anyone read my post at the top of this page... In the bottom right corner, note the handwritten "discoverer of Pluto" annotation.


Hi kkrattiger

I saw the post you mention, but I could not see a link to (or reference for) the relevant documents. Without seeing the full image in context, I can't give any proper comment.

Do you happen to have a link or reference for that document?

All the best,

Isaac


