
Turkey UFO UPDATE Dr Roger Leir speaks about ET FOOTAGE


posted on Aug, 25 2013 @ 05:12 PM

Originally posted by freelance_zenarchist

Originally posted by funkster4
...but I'd like to go back to the methodology, if I may.

Somebody (maybe ZR) asked for an example of the process applied to a "non-controversial" picture.
I normally prefer people to do their own homework, but I have a feeling I was being overly optimistic there.


Since you're not using PTM, and instead claim you're using your own independently developed image processing methodology, a process that only you know how to perform, how do you expect someone else to "do their own homework"? You're the only person on the planet who knows exactly what you are doing to these images.

The picture you posted of that statue is not yours, and does not show how your method works. So, since you've figured out how to use Photobucket and post images and files to the web, how about posting some of your own images showing how you created the image of the EBE pilot?



...the core modality of PTM is the use of interpolations from derivations of an identical data set. That is why, as explained, I presented the featured example. I think I made it clear enough that it was excerpted from a scientific publication, and why I was showing it. I don't think there is any ambiguity here...

In conventional PTM, only one variable (lighting) is used to produce the iterations. In my process, I use the conventional optical settings (lighting, sharpness, contour, texture, etc.) to produce the iterations. That is the only operational difference. The process is otherwise identical: you interpolate varying interpretations of the original data set, and you get a tremendous enhancement and noise reduction by mere iteration of the ever-growing data loop.
It is interesting to note that, though their first experiments were done with sets of only 50 iterations, Malzbender et al. stated recently (2010) that "the more iterations, the better". As explained, my own approach was based on the premise (quite obvious, I thought at the time) that objective information is far more likely to manifest itself frequently and coherently across the iterations, regardless of how it is interpreted, than random noise is.
By merely accumulating data (think Bayes' law here...) you accumulate knowledge, which is simply data put into perspective. Frequency of manifestation within the varying iterations serves as a tremendously powerful filter for sorting real, objective data from noise.

So interestingly, those guys are now corroborating one of the premises of my clumsy methodology: the more information (meaning: varying interpretations) you can interpolate, the better...
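To make the "frequency of manifestation" idea concrete, here is a minimal sketch, in Python, of stacking and combining many renditions of a single frame. This is a generic illustration of that principle only, not the actual process used on these frames (which has not been published); the file names are hypothetical, and the count of 107 is simply the Frame 11 figure mentioned below.

# Generic illustration only: stack N independently processed renditions of one
# frame and combine them per pixel. Detail that repeats across renditions
# reinforces, while variation that differs between renditions averages out
# (roughly as 1/sqrt(N) for the mean). File names are hypothetical.
import numpy as np
from PIL import Image

N = 107  # e.g. the 107 renditions mentioned for Frame 11
stack = np.stack([np.asarray(Image.open(f"frame11_iter{i:03d}.png"), dtype=np.float64)
                  for i in range(N)], axis=0)

mean_img = stack.mean(axis=0)           # simple per-pixel average
median_img = np.median(stack, axis=0)   # per-pixel median, more robust to outlier renditions

Image.fromarray(np.clip(median_img, 0, 255).astype(np.uint8)).save("frame11_stacked.png")

One caveat: stacking of this kind only suppresses variation that differs from rendition to rendition. Anything already baked into the single source frame, such as compression artifacts, appears in every rendition and is reinforced rather than removed.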

I can probably put a complete file on PhotoBucket (that's 107 iterations for Frame 11, for instance), though I would have to organize my stuff there (I have other material there), which would actually be a pain in the ... since, as you remember, I am not computer friendly. Which makes me ask you: what will you actually do with it? From the tone of previous exchanges, I get the impression that you're not quite approaching this from an objective critic's point of view.

So what I could do without much hassle at this time is post a sample of the file: I would suggest you give a specific interval (starting from the source, every ...X... iterations: every 5/10/20/etc.), or any serial combination you like. I'll post it here for your review.

In the meantime, I will verify how to upload the complete Frame 11 file to PhotoBucket.




posted on Aug, 25 2013 @ 05:20 PM
reply to post by funkster4
 



I don't think there is any ambiguity here...

ambiguity is all there is.



posted on Aug, 25 2013 @ 05:40 PM

Originally posted by ZetaRediculian
reply to post by funkster4
 



I don't think there is any ambiguity here...

ambiguity is all there is.



...Come on, I'm sure you can do better than that...



posted on Aug, 25 2013 @ 05:45 PM

Originally posted by funkster4

Originally posted by ZetaRediculian
reply to post by funkster4
 



I don't think there is any ambiguity here...

ambiguity is all there is.



...Come on, I'm sure you can do better than that...


Better than what? An ambiguous process on an ambiguous clip, producing ambiguous results, mixed in with some ambiguous claims and ambiguous logic? Nope. Got me there.



posted on Aug, 25 2013 @ 07:00 PM
You guys are idoits. It's a drone, plain and simple.



posted on Aug, 26 2013 @ 02:18 PM
Sorry if this has been covered; I'll be honest, I have read most of the thread, but I was thinking: is it possible that someone can get hold of shipping forecasts for that area on the date the video was filmed? It would give us an idea of whether any marine traffic was in the area.



posted on Aug, 27 2013 @ 03:38 PM

Originally posted by ZetaRediculian

Originally posted by funkster4

Originally posted by ZetaRediculian
reply to post by funkster4
 



I don't think there is any ambiguity here...

ambiguity is all there is.



...Come on, I'm sure you can do better than that...


Better than what? An ambiguous process on an ambiguous clip, producing ambiguous results, mixed in with some ambiguous claims and ambiguous logic? Nope. Got me there.


Maybe you could expand a bit on why the process is "ambiguous"? I showed you that it is used today by experts...



posted on Aug, 27 2013 @ 03:51 PM

Originally posted by Jordan River
You guys are idoits.


Sometimes irony can be pretty ironic.



posted on Aug, 27 2013 @ 04:16 PM
reply to post by funkster4
 


What you're doing is not polynomial texture mapping.


Polynomial texture mapping, also known as Reflectance Transformation Imaging (RTI), is a technique of imaging and interactively displaying objects under varying lighting conditions to reveal surface phenomena.

en.wikipedia.org...


You would need to photograph the object under different lighting conditions, i.e. from multiple light sources; all you have in the video is a single light source - the moon.


What you're showing is not polynomial texture mapping.


Typically, PTMs are used for displaying the appearance of an object under varying lighting direction, and specify the direction of a point light source. However, other applications are possible, such as controlling focus of a scene. PTMs can be used as light-dependent texture maps for 3D rendering, but typically are just viewed as ‘adjustable images’.

www.hpl.hp.com...


Instead of interactive images that show the changing light source, your static images look more like degraded JPEGs that have had too many Photoshop filters applied to them. They are filled with compression artifacts, which create all sorts of ambiguous shapes and forms (which you've labeled extraterrestrial pilots).

Your method is just as ambiguous, because you are taking a single frame from a low-quality video and making "slightly varying versions" of it. No one but you knows what changes you've made to the images, or whether those changes have added anything to the image. Since the original image is so small and there are only a few pixels to work with, my guess is that they have.
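For reference, here is a minimal sketch of what a conventional PTM fit involves: a biquadratic polynomial fitted per pixel to many photographs of the same static object, each taken with a known light direction (after Malzbender et al.). The array names and shapes are assumptions for illustration only; the point is that the fit needs multiple samples per pixel under different, known light directions, which a single moonlit video frame cannot supply.

# Sketch of a standard PTM fit. `images` is an (N, H, W) array of luminance
# values from N photos of the same static scene; `light_dirs` is an (N, 2)
# array holding the projected light direction (lu, lv) for each photo.
import numpy as np

def fit_ptm(images, light_dirs):
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Biquadratic basis: L ~ a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    basis = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)  # (N, 6)
    n, h, w = images.shape
    coeffs, _, _, _ = np.linalg.lstsq(basis, images.reshape(n, -1), rcond=None)  # (6, H*W)
    return coeffs.reshape(6, h, w)

def relight(coeffs, lu, lv):
    # Evaluate the fitted per-pixel polynomial for a new light direction.
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5

With only one light source in the footage there is only one luminance sample per pixel, so there is nothing for the polynomial to fit.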



posted on Aug, 27 2013 @ 05:26 PM

Originally posted by funkster4

Maybe you could expand a bit on why the process is "ambiguous"? I showed you that it is used today by experts...
The results are ambiguous. Let's say I have the best computer in the world and I feed it lots and lots of random data to process, and it produces something that looks like actual information, something that might be interesting, but that also looks like yesterday's leftover lunch from the day before, with some pretty colors. What we can say about this kind of useless information is that it's useful only when we need to fill space, or to pass it on to someone else AS IF it were useful, so we can feel good about ourselves. However, if I feed it good, solid data, I should get reliable results.



posted on Feb, 12 2018 @ 04:21 AM

Trick of the light: In this perfect example, an atmospheric duct between two layers of warm and cold air bends light so that the yacht appears to be floating above the water level.

This is a mirage.

This could explain why the bridge of a ship could appear to be floating in the air above the observer's horizon, while the rest of the ship remains hidden below the horizon.

Just an observation that still leaves open the possibility of these images being the bridges of passing ships.

Fun to tag an old thread, as well.



